sentences
sequence | labels
---|---
[
"Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations.",
"Some pioneering work has proved that storing a handful of historical relation examples in episodic memory and replaying them in subsequent training is an effective solution for such a challenging problem.",
"However, these memory-based methods usually suffer from overfitting the few memorized examples of old relations, which may gradually cause inevitable confusion among existing relations.",
"Inspired by the mechanism in human long-term memory formation, we introduce episodic memory activation and reconsolidation (EMAR) to continual relation learning.",
"Every time neural models are activated to learn both new and memorized data, EMAR utilizes relation prototypes for memory reconsolidation exercise to keep a stable understanding of old relations.",
"The experimental results show that EMAR could get rid of catastrophically forgetting old relations and outperform the state-of-the-art continual learning models.",
"The code and datasets are released on https://github.com/thunlp/ ContinualRE .",
"Relation extraction aims at detecting relations between entities from text, e.g., extracting the relation the president of from the given sentence Newton served as the president of the Royal Society , which could serve as external resource for various downstream applications (Dong et al., 2015; Xiong et al., 2017; Schlichtkrull et al.,",
"2018).",
"The conventional RE methods (Riedel et al., 2013; Zeng et al., 2014; Lin et al., 2016) mostly focus on recognizing relations for a fixed pre-defined relation set, and cannot handle rapidly emerging novel relations in the real world.",
"Some researchers therefore explore to detect and learn incessantly emerging relations in an open scenario.",
"As shown in Figure 1, their efforts can be formulated into a two-step pipeline: (1) Open Relation Learning extracts phrases and arguments to construct patterns of specific relations, and then discovers unseen relation types by clustering patterns, and finally expands sufficient examples of new relation types from large-scale textual corpora; (2) Continual Relation Learning continually uses those expanded examples of new relations to train an effective classifier.",
"The classifier is trained on a sequence of tasks for handling both existing and novel relations, where each task has its own relation set.",
"Although continual relation learning is vital for learning emerging relations, there are rare explorations for this field.",
"A straightforward solution is to store all historical data and re-train models every time new relations and examples come in.",
"Nevertheless, it is computationally expensive since relations are in sustainable growth.",
"Moreover, the huge example number of each relation makes frequently mixing new and old examples become infeasible in the real world.",
"Therefore, storing all data is not practical in continual relation learning.",
"In view of this, the recent preliminary work (Wang et al., 2019) indicates that the main challenge of continual relation learning is the catastrophic forgetting problem, i.e., it is hard to learn new relations and meanwhile avoid forgetting old relations, considering memorizing all the data is almost impossible.",
"Figure 1 : The whole pipeline to detect and learn new relations in an open scenario.",
"Recent work (Shin et al., 2017; Kemker and Kanan, 2018; Chaudhry et al., 2019) has shown that the memory-based approaches, maintaining episodic memory to save a few training examples in old tasks and re-training memorized examples during training new tasks, are one of the most effective solutions to the catastrophic forgetting problem, especially for continual learning in NLP scenarios (Wang et al., 2019; d'Autume et al., 2019).",
"However, existing memory-based models still suffer from an overfitting problem: when adapting them for continual relation learning, they may frequently change feature distribution of old relations, gradually overfit a few examples in memory, and finally become confused among old relations after long-term training.",
"In fact, these memory-based methods are similar to long-term memory model of mammalian memory in neuroscience (McClelland et al., 1995; Bontempi et al., 1999).",
"Although researchers in neuroscience are not clear about secrets inside the human brain, they reach a consensus that the formation of long-term memory relies on continually replaying and consolidating information (Tononi and Cirelli, 2006; Boyce et al., 2016; Yang et al., 2014), corresponding to the episodic memory and memory replay in continual learning models.",
"Yet later work (Nader et al., 2000; Lee et al., 2004; Alberini, 2005) in neuroscience indicates that reactivation of consolidated memory triggers a reconsolidation stage to continually maintain memory, and memory is easy to be changed or erased in this stage.",
"To apply some reconsolidation exercises can help memory go through this stage and keep long-term memory stable.",
"Intuitively, the existing memory-based models seem like continual memory activation without reconsolidation exercises, and thus become sensitive and volatile.",
"Inspired by the reconsolidation mechanism in human long-term memory formation, we introduce episodic memory activation and reconsolidation (EMAR) to continual relation learning in this paper.",
"More specifically, when training models on new relations and their examples, we first adopt memory replay to activate neural models on examples of both new relations and memory, and then utilize a special reconsolidation module to let models avoid excessively changing and erasing feature distribution of old relations.",
"As the core of relation learning is to grasp relation prototypes rather than rote memorization of relation examples, our reconsolidation module requires models to be able to distinguish old relation prototypes after each time memory is replayed and activated.",
"As compared with pioneering explorations to improve episodic memory replay (Chaudhry et al., 2019; Wang et al., 2019), with toughly keeping feature distribution of old relations invariant, EMAR is more flexible in feature spaces and powerful in remembering relation prototypes.",
"We conduct sufficient experiments on several RE datasets, and the results show that EMAR effectively alleviates the catastrophic forgetting problem and significantly outperforms the state-of-the-art continual learning models.",
"Further experiments and analyses indicate the reasons for the effectiveness of EMAR, proving that it can utilize a few examples in old tasks to reconsolidate old relation prototypes and keep better distinction among old relations after long-term training.",
"The conventional RE work, including both supervised RE models (Zelenko et al., 2003; Zhou et al., 2005; Gormley et al., 2015; Socher et al., 2012; Liu et al., 2013; Zeng et al., 2014; Nguyen and Grishman, 2015; dos Santos et al., 2015; Xu et al., 2015; Liu et al., 2015; Miwa and Bansal, 2016) and distantly supervised models (Bunescu and Mooney, 2007; Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Zeng et al., 2015; Lin et al., 2016; Han et al., 2018a; Baldini Soares et al., 2019), focuses on extracting pre-defined relations from text.",
"Yet in the real world, new relations are rapidly emerging, and it is impossible to train models with a fixed dataset once to cover all relations.",
"Hence, some researchers pay their attention to relation learning in various open scenarios, in order to detect and learn relations without pre-defined relation sets.",
"As we introduced before, learning incessantly emerging relations consists of two important steps: open relation learning and continual relation learning.",
"There have been many efforts for open relation learning, including pattern extraction (Banko et al., 2007; Fader et al., 2011; Mausam et al., 2012; Del Corro and Gemulla, 2013; Angeli et al., 2015; Petroni et al., 2015; Stanovsky and Dagan, 2016; Mausam, 2016; Cui et al., 2018), relation discovery (Yao et al., 2011; Marcheggiani and Titov, 2016), relation clustering (Shinyama and Sekine, 2006; Elsahar et al., 2017; Wu et al., 2019), and data collection (Riloff et al., 1999; Et-zioni et al., 2005; Pantel and Pennacchiotti, 2006; Rozenfeld and Feldman, 2008; Nakashole et al., 2011; Zhu et al., 2009; Gao et al., 2020).",
"However, for continual relation learning, there are still only some preliminary explorations for it.",
"Following continual learning setting 1 (Ring, 1994; Thrun and Pratt, 2012) in machine learning, Wang et al. (2019) first explore continual relation learning.",
"Existing continual learning methods focus on three research directions: (1) consolidation-based methods (Kirkpatrick et al., 2017; Zenke et al., 2017; Li and Hoiem, 2017; Liu et al., 2018; Ritter et al., 2018) which consolidate the model parameters important to previous tasks and reduce their learning weights; (2) dynamic architecture methods (Chen et al., 2016; Rusu et al., 2016; Fernando et al., 2017) which dynamically expand model architectures to learn new tasks and ef-1 Some work names it lifelong or incremental learning.",
"fectively prevent forgetting old tasks.",
"Yet model size growing dramatically with increasing tasks makes these methods unsuitable for NLP applications; (3) memory-based methods (Lopez-Paz and Ranzato, 2017; Rebuffi et al., 2017; Shin et al., 2017; Kemker and Kanan, 2018; Aljundi et al., 2018; Chaudhry et al., 2019) remember a few examples in old tasks and continually learn them with emerging new tasks to alleviate catastrophic forgetting.",
"Among these methods, the memory-based methods have been proven to be the most promising for NLP tasks, including both relation learning (Wang et al., 2019) and other NLP tasks (d'Autume et al., 2019; Sun et al., 2019).",
"Inspired by reconsolidation in human memory formation, we introduce episodic memory activation and reconsolidation (EMAR) to alleviate the overfitting problem of the existing memory-based methods and better learn relations continually.",
"Continual relation learning trains models on a sequence of tasks, where the k -th task has its own training set T k , validation set V k , and query set Q k .",
"Each set of the k -th task, e.g. T k = { ( x T k 1 , y T k 1 ) , . . . , ( x T k N , y T k N ) } , consists of a series of examples and their corresponding relation labels, where N is the example number of T k .",
"Each example x T k i and its label y T k i indicate that x T k i can express the relation y T k i R k , where R k is the relation set of the k -th task.",
"More specifically, models will be trained on T k at the k -th step to learn the new relations in R k .",
"As relations are emerging and accumulating, continual relation learning requires models to perform well on both the k -th task and previous k 1 tasks.",
"Hence, after training on T k , models will be evaluated on Q k = (cid:83) ki =1 Q i , and required to classify each query example into the all known relation set R k = (cid:83) ki =1 R i .",
"Therefore, the evaluation will be more and more difficult with the growth of tasks.",
"For handling the catastrophic forgetting in continual relation learning, an episodic memory module M = {M 1 , M 2 , . . . } is set to store a few examples of historical tasks, each memory module M k = { ( x M k 1 , y M k 1 ) , . . . , ( x M k B , y M k B ) } stores several examples and labels that come from T k , where ( x M k i , y M k i ) T k and B is the constrained memory size for each task.",
"As shown in Figure 2, when models are trained Data for Relation C Data in Memory Data for Activation Prototype Set Instance Set Select Combine Sample E L E P E L E L Learning Computing Prototypes Replay & Activation Reconsolidation Learn Relation A Learn Relation B Learn Relation C Learn Relation DP Prototypes E Encoder L Loss Figure 2 : A simple example of continually learning four tasks (each task has only one relation: A, B, C, D respectively) to demonstrate the overall framework of episodic memory activation and reconsolidation during continual relation learning.",
"The purple solid lines and dotted lines represent the forward and backward propagation respectively.",
"The black dotted lines represent the data flow.",
"on the k -th task, our framework includes several steps to learn new relations and meanwhile avoid forgetting old relations: (1) First (Section 3.3), we fine-tune the example encoder on the training set T k of the k -th task to let the model be aware of new relation patterns.",
"(2) Second (Section 3.4), for each relation in the k -th relation set R k , we select its informative examples and store the examples into the episodic memory M k .",
"(3) Finally (Section 3.5), we iteratively adopt memory replay and activation as well as memory reconsolidation to learn new relation prototypes while strengthening distinguishing old relation prototypes.",
"Besides, we will introduce how to train models as well as predict relations for query examples in Section 3.6.",
"As the example encoder is used in all other steps, we first introduce it in Section 3.2 before other steps.",
"Given an example x , we adopt an example encoder to encode its semantic features for detecting and learning relations.",
"To be specific, we first tokenize the given example into several tokens, and then input the tokenized tokens into neural networks to compute its corresponding embedding.",
"As extracting relations from sentences is related to those entities mentioned in sentences, we thus add special tokens into the tokenized tokens to indicate the beginning and ending positions of those entities.",
"For simplicity, we denote such an example encoding operation as the following equation, x = f ( x ) , (1) where x R d is the semantic embedding of x , and d is the embedding dimension.",
"Note that the encoder is not our focus in this paper, we select bidirectional long short-term memory (BiL-STM) (Bengio et al., 1994) as representative encoders to encode examples.",
"In fact, other neural text encoders like convolutional neural networks (Zeng et al., 2014) and pre-trained language models (Devlin et al., 2019) can also be adopted as example encoders.",
"When the k -th task is arising, the example encoder has not touched any examples of new relations before, and cannot extract the semantic features of them.",
"Hence, we first fine-tune the example encoder on T k = { ( x T k 1 , y T k 1 ) , . . . , ( x T k N , y T k N ) } to grasp new relation patterns in R k .",
"The loss function of learning the k -th task is as follows, L ( ) = N (cid:88) i =1 | R k | (cid:88) j =1 y T k i = r j log exp( g ( f ( x T k i ) , r j )) (cid:80) | R k | l =1 exp( g ( f ( x T k i ) , r l )) , (2) where r j is the embedding of the j -th relation r j R k in the all known relation set R k , g ( , ) is the function to compute similarities between embeddings (e.g. cosine similarity), and is the parameters that can be optimized, including the example encoder parameters and relation embeddings.",
"If y T k i equals r j , y T k i = r j = 1 , otherwise y T k i = r j = 0 .",
"For each new relation, we first randomly initialize its embedding and then optimize Eq.",
"(2).",
"After several epochs of learning for new tasks with Eq.",
"(2), we store a few examples from T k into the memory M k .",
"More specifically, we select informative and diverse examples from T k to cover new relation patterns as much as possible, which can make the memory effectively approximate the feature distribution of relations.",
"After encoding all examples of the k -th task T k into { x T k 1 , . . . , x T k N } , we apply K-Means to cluster these example embeddings, where the number of clusters is the memory size B .",
"Then, for each cluster, we select the example closest to the cluster centroid and record which relation these selected examples belong to.",
"We denote this selected example set C k .",
"By counting the example number in C k for each relation, we can describe the relation importance in this task: more selected examples of a relation indicates more importance.",
"As the limited memory size, for those more important relations, we select at least (cid:98) B |R k | (cid:99) examples, yet for those less important ones, we select at most (cid:100) B |R k | (cid:101) examples.",
"If a relation does not have enough examples to fill its allocated memory, this memory will be re-allocated for other relations.",
"For each relation, we also use K-Means to cluster its own examples, and the number of current clusters is its allocated example number in the memory.",
"For each cluster, we select the example closest to the cluster centroid, and store this example into the memory M k .",
"After fine-tuning the example encoder for T k and selecting informative examples for M k , we iteratively adopt computing prototypes , memory replay and activation , and memory reconsolidation to strengthen identifying new relation patterns and keep distinguishing old relation patterns.",
"By combining all examples in the episodic memory, we achieve the whole memory set M k = (cid:83) ki =1 M i .",
"As we aim to grasp relation prototypes rather than rote memorization of relation examples, for each known relation r i R k , we sample a prototype set P i = { x P i 1 , . . . , x P i |P i | } , where each example x P i i comes from M k and its label equals r i , and compute its prototype embedding, p i = (cid:80) |P i | j =1 f ( x P i j ) |P i | , (3) where p i is the relation prototype embedding of r i R k .",
"In memory replay and activation, the whole memory set M k and the k -th training set T k will be combined into an activation set A k = M k T k = { ( x A k 1 , y A k 1 ) , . . . , ( x A k M , y A k M ) } to continually activate models to learn new relations and remember old relations, where M is the total example number of both M k and T k .",
"The loss function is LA ( ) = M (cid:88) i =1 | R k | (cid:88) j =1 y A k i = r j log exp( g ( f ( x A k i ) , r j )) (cid:80) | R k | l =1 exp( g ( f ( x A k i ) , r l )) .",
"As we mentioned before, just conducting memory replay and activation will lead to the overfitting problem, and in the end, models only remember a handful of memorized examples after long-term training.",
"Meanwhile, the core of learning relations is to grasp relation prototypes rather than rote memorization of relation examples.",
"Hence, every time conducting memory replay and activation to grasp both new and old relations, we adopt a memory reconsolidation module to strengthen this process, which seems like conducting reconsolidation exercises to keep long-term memory stable in the human brain.",
"For each known relation r i R k , we sample its instance set I i = { x I i 1 , . . . , x I i |I i | } as is similar to sampling P i , where each example x I i i I i also comes from M k and its label equals r i .",
"The loss function of the memory reconsolidation is LR ( ) = | R k | (cid:88) i =1 |I i | (cid:88) j =1 log exp( g ( f ( x I i j ) , p i )) (cid:80) | R k | l =1 exp( g ( f ( x I i j ) , p l )) , (5) where p l is the relation prototype embedding of r l R k computed by Eq.",
"(3).",
"For training the k -th task, we first use L ( ) to optimize parameters for several epochs.",
"Then, we select examples for the memory, and iteratively optimize parameters with LA ( ) and LR ( ) until convergence.",
"More details about the training process are shown in Algorithm 1.",
"After finishing the k -th task, for each known relation r i R k , we collect all its memorized examples E i = { x E i 1 , . . . , x E i S } in the whole memory M k , where S is the example number of r i in the memory, and compute final relation prototype for prediction, p i = r i + (cid:80) Sj =1 f ( x E i j ) 1 + S , (6) where r i is the relation embedding of r i used in Eq.",
"(2) and Eq.",
"(4).",
"For each query example x in Q k , we define its score function for the relation r i : s ( x, r i ) = g ( f ( x ) , p i ) , (7) where p i is the final prototype of the relation r i computed by Eq.",
"(6).",
"Finally, the prediction y for the query x is calculated by: y = arg max r i R k s ( x, r i ) .",
"(1) FewRel (Han et al., 2018b).",
"FewRel is a RE dataset that contains 80 relations and 56 , 000 examples in total.",
"We follow the settings from Wang et al. (2019) to make FewRel a continual learning benchmark: FewRel is split into 10 clusters of relations, leading to 10 tasks and each relation just belongs to only one task.",
"Each example in these tasks is related to a relation and a candidate set of 10 randomly selected relations for evaluation.",
"(2) SimpleQuestions (Bordes et al., 2015).",
"SimpleQuestions (SimpleQ) is a knowledge base question answering dataset that contains 108 , 442 questions, and Yu et al. (2017) construct a relation detection dataset based on it, where questions are linked to relations.",
"Like FewRel, we follow the settings from Wang et al. (2019): SimpleQ is split into 20 clusters of relations to construct 20 tasks.",
"As each question in SimpleQ has been related to a candidate set for evaluation, we do not randomly sample candidate sets again for SimpleQ.",
"(3) TACRED (Zhang et al., 2017).",
"TACRED is a RE dataset that contains 42 relations and 21 , 784 examples.",
"Similar to FewRel, we also split TACRED into 10 clusters of relations to construct 10 tasks, and randomly sample candidate relation sets consisting of 10 relations for each examples.",
"Considering there is a special relation n/a (not available) in TACRED, we filter out these examples with the relation n/a and use the left examples for continual TACRED.",
"We use two evaluation settings including whole performance , which calculates the accuracy on the whole test set of all tasks, and average performance , which averages the accuracy on all seen tasks.",
"After having seen all tasks, we use the final whole performance and average performance to evaluate the overall performance of continual relation learning.",
"As average performance highlights the performance of handling catastrophic problem, and thus it is the main metric to evaluate models.",
"As the task sequence has influence on final model performance, we implement the baseline models by ourselves based on the toolkit 2 released by Wang et al. (2019).",
"For fair comparison, we unify the random seeds in our experiments completely consistent with the seeds in Wang et al. (2019), so that the task sequence can be completely consistent with Wang et al. (2019).",
"For other settings, such as hidden embedding dimension and pre-trained input embeddings, we also follow the settings in Wang et al. (2019).",
"We evaluate our model and several baselines on the benchmarks, and select two theoretical models to measure the lower and upper bounds: (1) Lower Bound , which continually fine-tunes models for each new task without memorizing any historical examples; (2) Upper Bound , which remembers all examples in history and continually re-train models with all data.",
"In fact, this model serves as the ideal upper bound for the performance of continual relation learning; (3) EWC (Kirkpatrick et al., 2017), which adopts elastic weight consolidation to add special L 2 regularization on parameter changes.",
"Then, EWC uses Fisher information to measure the parameter importance to old tasks, and slow down the update of those parameters important to old tasks; (4) EMR (Parisi et al., 2019), a basic memory-based method, which memorizes a few historical examples and simply conduct memory replay.",
"Every time a new task comes in, EMR mixes memorized examples and new examples together to fine-tune models; (5) GEM (Lopez-Paz and Ranzato, 2017), an extension of EMR, which adds a constraint on directions of new gradients to make sure that optimization directions do not conflict with gradients on old tasks; (6) AGEM",
"(Chaudhry et al., 2019), the extension of GEM, which takes the gradient on sampled memorized examples from memory as the only constraint on the optimization directions of the current task; (7) EA-EMR (Wang et al., 2019), which introduces memory replay and embedding aligned mechanism to enhance previous tasks and mitigate the embedding distortion when trained on new tasks.",
"EA-EMR is also an extension of EMR, and the state-of-the-art on continual relation learning.",
"Table 1 shows the overall performance on three benchmarks under two different settings.",
"From the table, we can see that (1) our proposed EMAR significantly outperforms other baselines and achieves state-of-the-arts almost in all settings.",
"On the SimpleQ dataset, the performance of EMAR is close to EA-EMR and EMR.",
"The reason is perhaps that the SimpleQ benchmark is over simple (even the weakest Lower Bound achieves relatively high results close to Upper Bound).",
"On other benchmarks, EMAR outperforms all the baseline models with a large margin, showing the superiority of our proposed episodic memory activation and reconsolidation mechanism.",
"(2) There is still a huge gap between our model and the upper bound.",
"It indicates there remains lots of things to be explored in continual relation learning.",
"To further investigate how accuracy changes while learning new tasks, we show the average performance of models at each step in Figure",
"3. As shown in the figure, we can observe that: (1) With increasing numbers of tasks, the performance of all the models decreases in some degree.",
"This indicates that catastrophically forgetting old relations is inevitable, and it is indeed one of the major difficulty for continual relation learning.",
"(2) The memory-based methods significantly outperform the consolidation-based method, which demonstrates the memory-based methods could alleviate the problem of catastrophic forgetting to some extent.",
"(3) Our proposed EMAR achieves a much better results compared to state-of-the-art model EA-EMR.",
"It shows the effectiveness of our memory reconsolidation, and further indicates understanding relation prototypes is more important and reasonable than rote memorization of examples.",
"Memory size indicates the number of remembered examples for each task.",
"In this section, we investigate the effect of memory size for the performance of baselines and our proposed model.",
"We compare three memory sizes: 10 , 25 and 50 .",
"As existing work does not report the results with different memory size, we re-implement baseline models by ourselves in this experiment.",
"The results are shown in Table",
"2. We can find that: (1) With the increasing memory size, the performance of all models improves respectively, which shows that the memory size is one of the key factor determining the performance of continual relation learning models.",
"(2) On both FewRel and TACRED, our EMAR keeps performing the best under different memory sizes, and even achieves comparable results with other models of larger memory sizes.",
"It indicates adopting relation prototypes in EMAR is a more effective way to utilize memory compared with existing memory-based methods.",
"To show the effectiveness of prototypes and reconsolidation, we give a case study demonstrating the changing of feature spaces learnt by EA-EMR and EMAR (ours).",
"We sample two relations from the training set and 40 examples per relation from the test set.",
"Then we train EA-EMR and EMAR with the sampled training data respectively and visualize the changes of the sampled 40 instances in the feature spaces at different steps.",
"From Figure 4, we can see that EMAR learns better features of instances after multi-step training: the embedding space of EMAR is more sparse and features from two relations are more distinguishable.",
"On the other hand, the features learnt by EA-EMR become more dense with increasing steps, thus harder to classify.",
"This phenomenon is mainly due to the different approaches of constraining features used by EA-EMR and EMAR.",
"The L 2 regularization used in EA-EMR for keeping the instance distribution of old relations leads to higher density in the feature space and smaller distances between different relations after several training steps.",
"On the contrary, EMAR avoids models from forgetting previous relations by relation prototypes.",
"Compared with EA-EMR, using prototypes for reconsolidation is a more flexible constraint, allowing EMAR to utilize larger feature spaces for representing examples and prototypes.",
"To quantitatively analyze the case, we use the support vector machine to acquire linear boundaries for each image in Figure 4 and list the classification results in Table",
"3. The quantitative results in the table show that embeddings learnt by EMAR achieve better classification performance, which further supports our above observations.",
"To alleviate catastrophically forgetting old relations in continual relation learning, we introduce episodic memory activation and reconsolidation (EMAR), inspired by the mechanism in human long-term memory formation.",
"Compared with existing memory-based methods, EMAR requires models to understand the prototypes of old relations rather than to overfit a few specific memorized examples, which can keep better distinction among relations after long-term training.",
"We conduct experiments on three benchmarks in relation extraction and carry out extensive experimental results as well as empirical analyses, showing the effectiveness of EMAR on utilizing memorized examples.",
"For future work, how to combine open relation learning and continual relation learning together to complete the pipeline for emerging relations still remains a problem, and we will continue to work on it.",
"This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61732008, 61772302).",
"Tianyu Gao is supported by 2019 Tencent Rhino-Bird Elite Training Program and Tsinghua University Initiative Scientific Research Program."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"method",
"other",
"other"
] |
[
"Question Answering (QA) has shown great success thanks to the availability of large-scale datasets and the effectiveness of neural models.",
"Recent research works have attempted to extend these successes to the settings with few or no labeled data available.",
"In this work, we introduce two approaches to improve unsupervised QA.",
"First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named as REFQA).",
"Second, we take advantage of the QA model to extract more appropriate answers, which iteratively refines data over REFQA.",
"We conduct experiments 1 on SQuAD 1.1, and NewsQA by fine-tuning BERT without access to manually annotated data.",
"Our approach outperforms previous unsupervised approaches by a large margin and is competitive with early supervised models.",
"We also show the effectiveness of our approach in the few-shot learning setting.",
"Extractive question answering aims to extract a span from the given document to answer the question.",
"Rapid progress has been made because of the release of large-scale annotated datasets (Ra-jpurkar et al., 2016, 2018; Joshi et al., 2017), and well-designed neural models (Wang and Jiang, 2016; Seo et al., 2016; Yu et al., 2018).",
"Recently, unsupervised pre-training of language models on large corpora, such as BERT (Devlin et al., 2019), has brought further performance gains.",
"However, the above approaches heavily rely on the availability of large-scale datasets.",
"The collection of high-quality training data is time-consuming and requires significant resources, esContribution during internship at Microsoft Research.",
"pecially for new domains or languages.",
"In order to tackle the setting in which no training data available, Lewis et al. (2019) leverage unsupervised machine translation to generate synthetic context-question-answer triples.",
"The paragraphs are sampled from Wikipedia.",
"NER and noun chunkers are employed to identify answer candidates.",
"Cloze questions are first extracted from the sentences of the paragraph, and then translated into natural questions.",
"However, there are a lot of lexical overlaps between the generated questions and the paragraph.",
"Similar lexical and syntactic structures render the QA model tend to predict the answer just by word matching.",
"Moreover, the answer category is limited to the named entity or noun phrase, which restricts the coverage of the learnt model.",
"In this work, we present two approaches to improve the quality of synthetic context-question-answer triples.",
"First, we introduce the REFQA dataset, which harvests lexically and syntactically divergent questions from Wikipedia by using the cited documents.",
"As shown in Figure 1, the sentence (statement) in Wikipedia and its cited documents are semantically consistent, but written with different expressions.",
"More informative context-question-answer triples can be created by using the cited document as the context paragraph and extracting questions from the statement in Wikipedia.",
"Second, we propose to iteratively refine data over REFQA.",
"Given a QA model and some REFQA examples, we first filter its predicted answers with a probability threshold.",
"Then we refine questions based on the predicted answers, and obtain the refined question-answer pairs to continue the model training.",
"Thanks to the pretrained linguistic knowledge in the BERT-based QA model, there are more appropriate and diverse answer candidates in the filtered predictions, some of which do not appear in the candidates extracted by NER tools.",
"We also show that iteratively refining the data further improves model performance.",
"We conduct experiments on SQuAD 1.1 (Ra-jpurkar et al., 2016), and NewsQA (Trischler et al., 2017).",
"Our method yields state-of-the-art results against strong baselines in the unsupervised setting.",
"Specifically, the proposed model achieves 71.4 F1 on the SQuAD 1.1 test set and 45.1 F1 on the NewsQA test set without using annotated data.",
"We also evaluate our method in a few-shot learning setting.",
"Our approach achieves 79.4 F1 on the SQuAD 1.1 dev set with only 100 labeled examples, compared to 63.0 F1 using the method of Lewis et al. (2019).",
"To summarize, the contributions of this paper include:",
"i) REFQA constructing in an unsupervised manner, which contains more informative context-question-answer triples.",
"ii) Using the QA model to iteratively refine and augment the question-answer pairs in REFQA.",
"Extractive Question Answering Given a document and question, the task is to predict a continuous sub-span of the document to answer the question.",
"Extractive question answering has garnered a lot of attention over the past few years.",
"Benchmark datasets, such as SQuAD (Rajpurkar et al., 2016, 2018), NewsQA (Trischler et al., 2017) and TriviaQA (Joshi et al., 2017), play an important role in the progress.",
"In order to improve the performance on these benchmarks, several models have been proposed, including BiDAF (Seo et al., 2016), R-NET (Wang et al., 2017), and QANet (Yu et al., 2018).",
"Recently, unsupervised pre-training of language models such as BERT (Devlin et al., 2019), achieves significant improvement.",
"However, these powerful models rely on the availability of human-labeled data.",
"Large annotated corpora for a specific domain or language are limited and expensive to construct.",
"Semi-Supervised QA Several semi-supervised approaches have been proposed to utilize unlabeled data.",
"Neural question generation (QG) models are used to generate questions from unlabeled passages for training QA models (Yang et al., 2017; Zhu et al., 2019b; Alberti et al., 2019; Dong et al., 2019).",
"However, the methods require labeled data to train the sequence-to-sequence QG model.",
"Dhingra et al. (2018) propose to collect synthetic context-question-answer triples by generating cloze-style questions from the Wikipedia summary paragraphs in an unsupervised manner.",
"Unsupervised QA Lewis et al. (2019) have explored the unsupervised method for QA.",
"They create synthetic QA data in four steps.",
"i) Sample paragraphs from the English Wikipedia.",
"ii) Use NER or noun chunkers to extract answer candidates from the context.",
"iii) Extract fill-in-the-blank cloze-style questions given the candidate answer and context.",
"iv) Translate cloze-style questions into natural questions by an unsupervised translator.",
"Compared with Dhingra et al. (2018), Lewis et al. (2019) attempt to generate natural questions by training an unsupervised neural machine translation (NMT) model.",
"They train the NMT model on non-aligned corpora of natural questions and cloze questions.",
"The unsupervised QA model of Lewis et al. (2019) achieves promising results, even outperforms early supervised models.",
"However, their questions are generated from the sentences or sub-clauses of the same paragraphs, which may lead to a biased learning of word matching since its similar lexicons and syntactic structures.",
"Besides, the category of answer candidates is limited to named entity or noun phrase, which restricts the coverage of the learnt QA model.",
"In this section, we introduce REFQA, a question answering dataset constructed in an unsupervised manner.",
"One drawback of Lewis et al. (2019) is that questions are produced from the paragraph sentence that contains the answer candidate.",
"So there are considerable expression overlaps between generated questions and context paragraphs.",
"In contrast, we harvest informative questions by taking advantage of Wikipedia's reference links, where lexical and syntactic differences exist between the article and its cited documents.",
"As shown in Figure 1, given statements in Wikipedia paragraphs and its cited documents, we use the cited documents as the context paragraphs and generate questions from the sub-clauses of statements.",
"In order to generate question-answer pairs, we first find answer candidates that appear in both sub-clauses and context paragraphs.",
"Next, we convert sub-clauses into the cloze questions based on the candidate answers.",
"We then conduct cloze-to-natural-question translation by a depen-0 10 20 30 40 50 60 70 80 90 100 10 100 1000 10000 100000 F 1 S c o r e Number of Labeled Training Data BERT-Large BERT-Large + Lewis et al.",
"dency tree reconstruction algorithm.",
"We describe the details as follows.",
"Statements in Wikipedia and its cited documents often have similar content, but are written with different expressions.",
"Informative questions can be obtained by taking the cited document as the context paragraph, and generate questions from the statement.",
"We crawl statements with reference links from the English Wikipedia.",
"The cited documents are obtained by parsing the contents of reference webpages.",
"Given a statement and its cited document, we restrict the statement to its sub-clauses, and extract answer candidates (i.e., named entities) that appear in both of them by using a NER toolkit.",
"We then find the answer span positions in the context paragraph.",
"If the candidate answer appears multiple times in the context, we select the position whose surrounding context has the most overlap with the statement.",
"We first generate cloze questions (Lewis et al., 2019) from the sub-clauses of Wikipedia statements.",
"Then we introduce a rule-based method to rewrite them to more natural questions, which utilizes the dependency structures.",
"Cloze questions are the statements with the answer replaced to a mask token.",
"Following Lewis et al. (2019), we replace answers in statements with a special mask token, which depends on its answer category 2 .",
"Using the statement and the answer (with a type label PRODUCT ) from Figure 1, this leaves us with the cloze question Guillermo crashed a Matt Damon interview, about his upcoming movie [THING] .",
"We perform a dependency reconstruction to generate natural questions.",
"We move answer-related words in the dependency tree to the front of the question, since answer-related words are important.",
"The intuition is that natural questions usually start with question words and question focus (Yao and Van Durme, 2014).",
"As shown in Figure 2, we apply the dependency parsing to the cloze questions, and translate them to natural questions by three steps:",
"i) We keep the right child nodes of the answer and prune its lefts.",
"ii) For each node in the parsing tree, if the subtree of its child node contains the answer node, we move the child node to the first child node.",
"iii) Finally, we obtain the natural question by inorder traversal on the reconstructed tree.",
"We apply the same rule-based mapping as Lewis et al. (2019), which replaces each answer category with the most appropriate wh* word.",
"For example, the THING category is mapped to What .",
"2 We obtain the answer type labels by a NER toolkit, and group these labels to high-level answer categories, which are used as our mask tokens, e.g., PRODUCT corresponding to THING , LOC corresponding to PLACE .",
"In this section, we propose to iteratively refine data over REFQA based on the QA model.",
"As shown in Figure 3, we use the QA model to filter REFQA data, find appropriate and diverse answer candidates, and use these answers to refine and augment REFQA examples.",
"Filtering data can get rid of some noisy examples in REFQA, and pretrained linguistic knowledge in the BERT-based QA model finds more appropriate and diverse answers.",
"We produce questions for the refined answers, then continue to train the QA model on the refined and filtered triples.",
"The first step of iterative data refinement is to train an initial QA model.",
"We use the REFQA examples SI = { ( c i , q i , a i ) } Ni =1 to train a BERT-based QA model P ( a | c, q ) by maximizing: (cid:88) SI log P ( a i | c i , q i ) (1) where the triple consists of context c i , question q i , and answer a i .",
"As shown in Figure 3, the QA model P ( a | c, q ) is used to refine the REFQA examples.",
"We first conduct inference on the unseen data (denoted as SU ), and obtain the predicted answers and their probabilities.",
"For each predicted answer a (cid:48) i , if it agrees with the gold answer a i , we keep the original question.",
"For the case that a (cid:48) i (cid:54) = a i , we treat a (cid:48) i as our new answer candidate.",
"Besides, we use the question generator (Section 3.2) to refine the original question q i to q (cid:48) i .",
"In this step, using the QA model for filtering helps us get rid of some noisy examples.",
"The refined question-answer pairs ( q (cid:48) i , a (cid:48) i ) can also augment the REFQA examples.",
"The pretrained linguistic knowledge in the BERT-based QA model is supposed to find more novel answers, i.e., some candidate answers are not extracted by the NER toolkit.",
"With the refined answer spans, we then use the question generator to produce their corresponding questions.",
"After refining the dataset, we concatenate them with the filtered examples whose candidate answers agree with the predictions.",
"The new training set is then used to continue to train the QA model.",
"The training objective is defined as: max (cid:88) a (cid:48) i ZA [ I ( a (cid:48) i = a i ) log P ( a i | c i , q i ) + I ( a (cid:48) i (cid:54) = a i ) log P ( a (cid:48) i | c i , q (cid:48) i )] , (2) Algorithm 1: Iterative Data Refinement Input: synthetic context-question-answer triples S = { ( c i , q i , a i ) } Ni =1 , a threshold and a decay factor .",
"where I ( ) is an indicator function (i.e., 1 if the condition is true).",
"Using the resulting QA model, we further refine question-answer pairs and repeat the training procedure.",
"The process is repeated until the performance plateaus, or no new data available.",
"Besides, in order to obtain more diverse answers during iterative training, we apply a decay factor for the threshold .",
"The pseudo code of iterative data refinement is presented in Algorithm 1.",
"We evaluate our proposed method on two widely used extractive QA datasets (Rajpurkar et al., 2016; Trischler et al., 2017).",
"We also demonstrate the effectiveness of our approach in the few-shot learning setting.",
"REFQA Construction We collect the statements with references from English Wikipedia following the procedure in (Zhu et al., 2019a).",
"We only consider the references that are HTML pages, which results in 1.4M statement-document pairs.",
"In order to make sure the statement is relevant to the cited document, we tokenize the text, remove stop words and discard the examples if more than half of the statement tokens are not in the cited document.",
"The article length is limited to 1,000 words for cited documents.",
"Besides, we compute ROUGE-2 (Lin, 2004) as correlation scores between statements and context.",
"We use the score's median ( 0 . 2013 ) as a threshold, i.e., half of the data with lower scores are discarded.",
"We obtain 303K remaining data to construct our REFQA.",
"We extract named entities as our answer candidates, using the NER toolkit of Spacy.",
"We split the statements into sub-clauses with Berkeley Neural Parser (Kitaev and Klein, 2018).",
"The questions are generated as in Section 3.2.",
"We also discard sub-clauses that are less than 6 tokens, to prevent losing too much information of original sentences.",
"Finally, we obtain 0.9M REFQA examples.",
"Question Answering Model We adopt BERT as the backbone of our QA model.",
"Following (De-vlin et al., 2019), we represent the question and passage as a single packed sequence.",
"We apply a linear layer to compute the probability of each token being the start or end of an answer span.",
"We use Adam (Kingma and Ba, 2015) as our optimizer with a learning rate of 3e-5 and a batch size of 24.",
"The max sequence length is set to 384.",
"We split the long document into multiple windows with a stride of 128.",
"We use the uncased version of BERT-Large (Whole Word Masking).",
"We evaluate on the dev set every 1000 training steps, and conduct early stopping when the performance plateaus.",
"Iterative Data Refinement We uniformly sample 300k data from REFQA to train the initial QA model.",
"We split the remaining 600k data into 6 parts for iterative data refinement.",
"For each part, we use the current QA model to refine question-answer pairs.",
"We combine the refined data with filtered data in a 1:1 ratio to continue training the QA model.",
"Specially, we keep the original answer if its prediction is a part of the original answer during inference.",
"The threshold is set to 0.15 for filtering the model predictions.",
"The decay factor is set to 0.9.",
"We conduct evaluation on the SQuAD 1.1 (Ra-jpurkar et al., 2016), and the NewsQA (Trischler et al., 2017) datasets.",
"We compare our proposed approach with previous unsupervised approaches and several supervised models.",
"Performance is measured via the standard Exact Match (EM) and F1 metrics.",
"Dhingra et al. (2018) propose to train the QA model on the cloze-style questions.",
"Here we take the unsupervised results that re-implemented by Lewis et al. (2019) with BERT-Large.",
"The other unsupervised QA system (Lewis et al., 2019) borrows the idea of unsupervised machine translation (Lample et al., 2017) to convert cloze questions into natural questions.",
"For a fair comparison, we use their published data 3 to re-implement their approach based on BERT-Large (Whole Word Masking) model.",
"Table 1 shows the main results on SQuAD 1.1 and NewsQA.",
"Training QA model on our REFQA outperforms the previous methods by a large margin.",
"Combining with iterative data refinement, our approach achieves new state-of-the-art results in the unsupervised setting.",
"Our QA model attains 71.4 F1 on the SQuAD 1.1 test set and 45.1 F1 on the NewsQA test set without using their annotated data, outperforming all of the previous unsupervised methods.",
"In particular, the results are competitive with early supervised models.",
"We conduct ablation studies on the SQuAD 1.1 dev set, in order to better understand the contributions of different components in our method.",
"We conduct experiments on REFQA and another synthetic dataset (named as WIKI ).",
"The WIKI dataset is constructed using the same method as in Lewis et al. (2019), which uses Wikipedia pages as context paragraphs for QA examples.",
"In addition to the dependency reconstruction method (Section 3.2.2), we compare three cloze translation methods proposed in Lewis et al. (2019).",
"Noise Cloze first applies a noise model, such as permutation, and word drop, as in Lample et al. (2017), and then applies the Identity Mapping translation.",
"UNMT converts cloze questions into natural questions following unsupervised neural machine translation.",
"Here we directly use the published model of Lewis et al. (2019) for evaluation.",
"For a fair comparison, we sample 300k training data for each dataset, and fine-tune BERT-Base for 2 epochs.",
"As shown in Table 2, training on our REFQA achieves a consistent gain over all cloze translation methods.",
"Moreover, our dependency reconstruction method is also favorable compared with the Identity Mapping method.",
"The improvement of DRC on WIKI is smaller than on REFQA.",
"We argue that it is because WIKI contains too many lexical overlaps, while DRC mainly focuses on providing structural diversity.",
"We present the generated questions of our method (DRC) and UNMT in Table",
"3. Most natural questions follow a similar structure: question word (what/who/how), question focus (name/-money/time), question verb (is/play/take) and topic (Yao and Van Durme, 2014).",
"Compared with UNMT, our method adjusts answer-related words in the dependency tree according to the linguistic characteristics of natural questions.",
"We validate the effectiveness of combining refined and filtered data for our data refinement.",
"We use only refined or filtered data to train our QA model, comparing with the combining approach.",
"The results are shown in Table",
"4. We observe 0.0 0.1 0.15 0.2 0.3 0.5 0.7 EM 54.3 61.2 61.8 61.1 59.7 59.2 58.5 F1 69.6 70.4 71.0 70.9 69.4 68.7 67.7 Table 5: Results of using different confidence thresholds during the construction of the refined data and filtered data.",
"that both data can help the QA model to achieve better performance.",
"Moreover, the combination of refined and filtered data is more useful than only using one of them.",
"Using iterative training, our combination approach further improves the model performance to 72.6 F1 (1.6 absolute improve-ment).",
"Besides, using our refined data contributes further improvement compared with filtered data.",
"We also analyze the effects of threshold on refined data and filtered data.",
"As shown in Figure 4, for the filtered data, using a higher confidence threshold achieves better performance, suggesting that using the QA model for filtering makes our examples more credible.",
"For the refined data and the combination, we observe that the threshold 0.15 achieves a better performance than the threshold 0.3, but the EM is greatly reduced when the threshold is set to 0.0.",
"Besides, there are 26,257 answers that do not appear in named entities using the threshold 0.15, compared to 15,004 for the threshold 0.3.",
"Thus, an appropriate threshold can help us improve the answer diversity and get rid of some noisy examples.",
"5.3.3 Effects of Confidence Threshold We experiment with several thresholds (0.0, 0.1, 0.15, 0.2, 0.3, 0.5 and 0.7) to filter the predicted answers.",
"Their QA results on SQuAD 1.1 dev set are presented in Table",
"5. Using threshold of 0.15 achieves better performance.",
"For brevity, we denote the original answer and predicted answer by OA and PA, respectively.",
"In order to analyze the contribution of our refined data, we categorize the data refinements into the following three types: OA PA The original answer contains the predicted answer.",
"OA PA The predicted answer contains the original answer.",
"Others The remaining data except for the above two types of refinement.",
"For each type, we keep the original data or use refined data to train our QA model.",
"We conduct experiments on the non-iterative setting with the data combination.",
"As shown in Table 6, our refined data improves the QA model in most types of refinement except OA PA.",
"The results indicate that the QA model favors longer phrases as answer spans.",
"Moreover, for the OA PA and Others types, there are 47.8% answers that are not extracted by the NER toolkit.",
"The iterative refinement extends the category of answer candidates, which in turn produces novel question-answer pairs.",
"We show a few examples of our generated data in Table 7.",
"We list one example for each type.",
"For the OA PA refinement, the predicted answer is a sub-span of the extracted named entity, but the complete named entity is more appropriate as an answer.",
"For the OA PA refinement, the QA model can help us extend the original answer to be a longer span, which is more complete and appropriate.",
"Besides, for the Others refinement, its prediction can be a new answer, and not appear in named entities extracted by the NER toolkit.",
"From Wikipedia, the free encyclopedia In August 2013, Guillermo crashed a Matt Damon interview, about his upcoming movie Elysium, by promoting his own movie called \"Estupido\", about a stupid man, which poster had an arrow pointing towards Matt Damon.",
"[17] At the end of the interview, Matt removed the poster, revealing on the other side the name of another Guillermo movie called \"Ass Face, also with an arrow pointing towards Matt. Matt accuses Guillermo of acting on Kimmel's orders and, facing the camera, starts to say \"you...\", During Kimmel's 2016 post-Oscar special, Ben Affleck wore a very large coat for his appearance, and Damon emerged from the coat for the interview.",
"Guillermo crashed a Matt Damon interview, about his upcoming movie Elysium Guillermo crashed a Matt Damon interview, about his upcoming movie [THING] Elysium extract sub-clause extract answer replace answer What his upcoming movie about Guillermo crashed a Matt Damon interview find answer position In the clip, Kimmel interrupted an interview Damon is giving while sitting in front of a poster for Elysium, Statement In the clip, Kimmel sidekick/parking lot security guard Guillermo Rodriguez interrupted an interview Damon is giving while sitting in front of a poster for Elysium, and propped up his own movie poster for a film called Estup-ido.",
"Cited Document Context Natural Question Answer Cloze Question translate QA model New Training Data Predictions { } Filtered (, , ) Refined (, , ) = ?",
"5.4 Few-Shot Learning Following the evaluation of (Yang et al., 2017; Dhingra et al., 2018), we conduct experiments in a few-shot learning setting.",
"We use the best configuration of our approach to train the unsupervised QA model based on BERT-Large (Whole Word Masking).",
"Then we fine-tune the model with limited SQuAD training examples.",
"However, he was removed from the studio by an enraged Kimmel, who then moved on to interview Affleck.",
"Later, Damon appeared in a sketch about the movie that Affleck stars in, Batman v Superman: Dawn of Justice, reprising his role as astronaut Mark Watney.[19]",
"The sign was bright yellow, with the title in big bold letters and an arrow pointed down at Damon.",
"As shown in Figure 5, our method obtains the best performance in the restricted setting, compared with the previous state of the art (Lewis et al., 2019) and directly fine-tuning BERT.",
"Moreover, our approach achieves 79 .",
"4 F1 (16.4 absolute gains than other models) with only 100 labeled examples.",
"The results illustrate that our method can greatly reduce the demand of in-domain annotated data.",
"In addition, we observe that the results of different methods become comparable when the labeled data size is greater than 10,000.",
"In this paper, we present two approaches to improve the quality of synthetic QA data for unsupervised question answering.",
"We first use the Wikipedia paragraphs and its references to construct a synthetic QA data REFQA and then use the QA model to iteratively refine data over REFQA.",
"Our method outperforms the previous unsupervised state-of-the-art models on SQuAD 1.1, and NewsQA, and achieves the best performance in the few-shot learning setting.",
"The work was partially supported by National Natural Science Foundation of China (NSFC) [Grant No. 61421003].",
"References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins.",
"2019.",
"Synthetic QA corpora generation with roundtrip consistency.",
"In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 6168 6173, Florence, Italy.",
"Association for Computational Linguistics.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
"2019.",
"BERT: Pre-training of deep bidirectional transformers for language understanding.",
"In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , pages 41714186, Minneapolis, Minnesota.",
"Association for Computational Linguistics.",
"Bhuwan Dhingra, Danish Danish, and Dheeraj Ra-jagopal.",
"2018.",
"Simple and effective semi-supervised question answering.",
"In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 582587, New Orleans, Louisiana.",
"Association for Computational Linguistics.",
"Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi-aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019.",
"Unified language model pre-training for natural language understanding and generation.",
"In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019) .",
"Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy.",
"2019.",
"SpanBERT: Improving pre-training by representing and predicting spans.",
"arXiv preprint arXiv:1907.10529 ."
] | [
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"result",
"method",
"result",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Video Question Answering is a task which requires an AI agent to answer questions grounded in video.",
"This task entails three key challenges: (1) understand the intention of various questions, (2) capturing various elements of the input video ( e.g., object, action, causality), and (3) cross-modal grounding between language and vision information.",
"We propose M otionA ppearance S ynergistic N etworks (MASN), which embed two cross-modal features grounded on motion and appearance information and selectively utilize them depending on the question's intentions.",
"MASN consists of a motion module, an appearance module, and a motion-appearance fusion module.",
"The motion module computes the action-oriented cross-modal joint representations, while the appearance module focuses on the appearance aspect of the input video.",
"Finally, the motion-appearance fusion module takes each output of the motion module and the appearance module as input, and performs question-guided fusion.",
"As a result, MASN achieves new state-of-the-art performance on the TGIF-QA and MSVD-QA datasets.",
"We also conduct qualitative analysis by visualizing the inference results of MASN.",
"The code is available at https://github.com/ ahjeongseo/MASN-pytorch .",
"Recently, research in natural language processing and computer vision has made significant progress in artificial intelligence (AI).",
"Thanks to this, vision-language tasks such as image captioning (Xu et al., 2015), visual question answering (VQA) (Antol et al., 2015; Goyal et al., 2017), and visual commonsense reasoning (VCR) (Zellers et al., 2019) have been introduced to the research community, Work done during an internship at AI Institute for Seoul National University (AIIS).",
"along with some benchmark datasets.",
"In particular, video question answering (video QA) tasks (Xu et al., 2016; Jang et al., 2017; Lei et al., 2018; Yu et al., 2019; Choi et al., 2020) have been proposed with the goal of reasoning over higher-level vision-language interactions.",
"In contrast to QA tasks based on static images, the questions presented in the video QA dataset vary from frame-level questions regarding the appearance of objects ( e.g., what is the color of the hat?) to questions regarding action and causality ( e.g., what does the man do after opening a door?).",
"There are three crucial challenges in video QA: (1) understand the intention of various questions, (2) capturing various elements of the input video ( e.g., object, action, and causality), and (3) cross-modal grounding between language and vision information.",
"To tackle these challenges, previous studies (Li et al., 2019; Jiang et al., 2020; Huang et al., 2020) have mainly explored this task by jointly embedding the features from the pre-trained word embedding model (Pennington et al., 2014) and the object detection models (He et al., 2016; Ren et al., 2016).",
"However, as discussed in (Gao et al., 2018), the use of the visual features extracted from the object detection models suffers from motion analysis since the object detection model lacks temporal modeling.",
"To enforce the motion analysis, a few approaches (Xu et al., 2017; Gao et al., 2018) have employed additional visual features (Tran et al., 2015) ( i.e., motion features) which were widely used in the action recognition domain, but their reasoning capability is still limited.",
"They typically employed recurrent models ( e.g., LSTM) to embed a long sequence of the visual features.",
"Due to the problem of long-term dependency in recurrent models (Bengio et al., 1993), their proposed methods may fail to learn dependencies between distant features.",
"In this paper, we propose Motion-Appearance Figure 1: An overview of MASN.",
"Synergistic Networks (MASN) for video question answering which consist of three kinds of modules: the motion module, the appearance module, and the motion-appearance fusion module.",
"As shown in Figure 1, the motion module and the appearance module aim to embed rich cross-modal representations.",
"These two modules have the same architecture except that the motion module takes the motion features extracted from I3D as visual features and the appearance module utilizes the appearance features extracted from ResNet.",
"Each of these modules first constructs the object graphs via graph convolutional networks (GCN) to compute the relationships among objects in each visual feature.",
"Then, the vision-question interaction module performs cross-modal grounding between the output of the GCNs and the question features.",
"The motion module and the appearance module each yield cross-modal representations of the motion and the appearance aspects of the input video respectively.",
"The motion-appearance fusion module finally integrates these two features based on the question features.",
"The main contributions of our paper are as follows.",
"First, we propose Motion-Appearance Synergistic Networks (MASN) for video question answering based on three modules, the motion module, the appearance module, and the motion-appearance fusion module.",
"Second, we validate MASN on the large-scale video question answering datasets TGIF-QA, MSVD-QA, and MSRVTT-QA.",
"MASN achieves the new state-of-the-art performance on TGIF-QA and MSVD-QA.",
"We perform ablation studies to validate the effectiveness of our proposed methods.",
"Finally, we conduct a qualitative analysis of MASN by visualizing inference results.",
"Visual Question Answering (VQA) is a task that requires both understanding questions and finding clues from visual information.",
"VQA can be clas-sified into two categories based on the type of the visual source: image QA and video QA.",
"In image QA, earlier works approach the task by applying attention between the question and the spatial dimensions of the image (Yang et al., 2016; Anderson et al., 2018; Kim et al., 2018a; Kang et al., 2019).",
"In video QA, since a video is represented as a sequence of images over time, recognizing the movement of objects or causality in the temporal dimension should also be considered along with the details from the spatial dimension (Jang et al., 2017; On et al., 2020).",
"There have been some attempts (Xu et al., 2017; Gao et al., 2018; Fan et al., 2019) to extract motion and appearance features and integrate them on a spatio-temporal dimension via memory networks.",
"Li et al. (2019), Huang et al. (2020), Jiang et al. (2020) proposed better performing models using attention in order to overcome the long-range dependency problem in memory networks.",
"However, they do not represent motion information sufficiently since they only use features pre-trained on image or object classification.",
"To better address this, we model spatio-temporal reasoning on multiple visual information ( i.e., ResNet, I3D) while also solving the long-range dependency problem that occurred in previous studies.",
"Action Classification is a task of recognizing actions, which are composed of interactions between actors and objects.",
"Therefore, this task has much in common with video QA, in that the model should perform spatio-temporal reasoning.",
"For better spatio-temporal reasoning, Tran et al. (2015) introduced C3D, which extends the 2D CNN filters to the temporal dimension.",
"Carreira and Zisserman (2017) proposed I3D, which integrates 3D convolutions into a state-of-the-art 2D CNN architecture, which now acts as a baseline in action classification tasks (Murray et al., 2012; Girdhar et al., 2018).",
"Feichtenhofer et al. (2019) introduced SlowFast, a network which encodes images in two streams with different frame rates and temporal resolutions of convolution.",
"This study based on a two-stream architecture inspired us in terms of assigning different inputs to each encoder module.",
"However, our method differs from the former studies in two aspects: (1) we utilize language features as well as vision features, and (2) we expand the two-stream structure to solve more than motion-oriented tasks.",
"Attention Mechanism explicitly calculates the correlation between two features (Bahdanau et al., 2015; Lin et al., 2017), and has been widely used in a variety of fields.",
"For machine translation, the Transformer architecture first introduced by Vaswani et al. (2017), utilizes multi-head self-attention that captures diverse aspects in the input features (Voita et al., 2019).",
"For video QA, Kim et al. (2018b); Li et al. (2019) use self and guided-attention to encode temporal dynamics in video and ground them in the question.",
"For multi-modal alignment, Tsai et al. (2019) apply the Transformer to merge cross-modal time series between vision, language, and audio features.",
"We utilize the attention mechanism to capture various relations between appearance and motion and to aggregate them.",
"In this section, we introduce a detailed description of our MASN network.",
"First, we explain how to obtain appearance and motion features in Section 3.1.",
"Then, we describe the Appearance and Motion modules, which encode visual features and combine them with the question in Section 3.2.",
"Finally, the Motion-Appearance Fusion module modulates the amount of motion and appearance information utilized and integrates them based on question context.",
"We first extract appearance and motion features from the video frames.",
"For the appearance representation, we use ResNet (He et al., 2016) pre-trained on an object and its attribute classification task as a feature extractor.",
"For the motion representation, we use I3D (Carreira and Zisserman, 2017) pre-trained on the action classification task.",
"We obtain local features representing object-level information without background noise and global features representing each frame's context for both appearance and motion features.",
"Appearance Representation.",
"For local features, given a video containing T frames, we obtain N objects from each frame using Faster R-CNN (Ren et al., 2016) that applies RoIAlign to extract the region of interest from ResNet's convolutional layer.",
"We denote the appearance-object set as R a = { o at,n , b t,n } t = T,n = N t =1 ,n =1 , where o , b indicate object feature and bounding box location, respectively.",
"Therefore, there are K = N T objects in a single video.",
"Following previous works, we extract the feature map from ResNet-152's Conv5 layer and apply a linear projection (Jiang et al., 2020; Huang et al., 2020).",
"We denote global features as v aglobal RT d , where d is the size of the hidden dimension.",
"Motion Representation.",
"We obtain a feature map from the last convolutional layer in I3D (Car-reira and Zisserman, 2017) whose dimension is (time, width, height, feature) = ( (cid:4) t 8 (cid:5) , 7, 7, 2048).",
"That is, each set of 8 frames is represented as a single feature map with dimension 7 7 2048 .",
"For local features, we apply RoIAlign (He et al., 2017) on the feature map using object bounding box location b .",
"We define the motion-object set as R m = { o mt,n , b t,n } t = T,n = N t =1 ,n =1 .",
"We apply average pooling in the feature map and linear projection to obtain global features v mglobal RT d .",
"Location Encoding.",
"To reason about relations between objects as in Section 3.2, it is required to consider each object's spatial and temporal location.",
"As appearance and motion features share identical operations until the Motion-Appearance Fusion module, we combine superscript a and m for simplicity.",
"Following L-GCN (Huang et al., 2020), we add a location encoding and define local features as: v a/mlocal = FFN([ o a/m ; d s ; d t ]) (1) where d s = FFN( b ) and d t is obtained by position encoding according to each frame's index.",
"Here o a/m denotes the object features mentioned above while FFN denotes a feed-forward network.",
"Analogous to local features, position encoding information d t is added to global features as well.",
"We then concatenate object features with global features to reflect the frame-level context in objects and obtain the visual representation v a/m RK d : v a/m = FFN([ v a/mlocal ; v a/mglobal ]) (2) Linguistic Representation.",
"We apply the pre-trained GloVe to convert each question word into a 300-dimensional vector, following previous work (Jang et al., 2017).",
"To represent contextual information in a sentence, we feed the word representations into a bidirectional LSTM (bi-LSTM).",
"Word-level features and last hidden units from the bi-LSTM are denoted by F q RL d , and q R d respectively.",
"L denotes the number of words in a question.",
"In this section, we explain the modules generating high-level visual representations and integrate them with linguistic representations.",
"Each module consists of (1) an Object Graph : spatio-temporal reasoning between object-level visual features, and (2) VQ interaction : calculating correlations between objects and words and obtaining cross-modal feature embeddings.",
"Since the modules share the same architecture, we describe each module's components only once with a shared superscript to avoid redundancy.",
"In this section, we define object graphs G = ( V a/m , E a/m ) to capture spatio-temporal relations between objects.",
"V , E denotes the node and edge set of the graph.",
"As equation 2 provides visual features v a/m , we define these as the graph input X a/m RK d .",
"We denote the graph as G a/m .",
"The nodes of graph G a/m are given by v a/m i X a/m , and edges are given by ( v a/mi , v a/mj ) , representing a relationship between the two nodes.",
"Given the constructed graph G , we perform graph convolution (Kipf and Welling, 2016) to obtain the relation-aware object features.",
"We obtain the similarity scores of nodes by calculating the dot-product after projecting input features to the interaction space and define the adjacency matrix A a/m RK K as follows: A a/m = softmax(( X a/m W 1 )( X a/m W 2 ) (cid:62) ) (3) We denote the two-layer graph convolution on input X with adjacency matrix A as: GCN( X ; A ) = ReLU( A ReLU( AXW 3 ) W 4 ) F = LayerNorm( X + GCN( X ; A )) (4) We omit superscripts in the graph convolution equation for simplicity.",
"We add a skip connection for residual learning between self-information X and smoothed-information with neighbor objects.",
"We compute both appearance-question and motion-question interaction to obtain correlations between language and each of the visual features.",
"As we encode visual feature F a/m and question feature F q in Equation 4 and Section 3.1, we calculate every pair of relations between two modalities using the bilinear operation introduced in BAN (Kim et al., 2018a) as follows: H i = 1 BAN i ( H i 1 , V ; A i ) (cid:62) + H i 1 (5) where H 0 = F q , 1 RL , 1 i g and A denotes the attention map.",
"F a/m is substituted for V respectively in our method.",
"In the equation above, calculating the result BAN( H, V ; A ) R d and adding it to the H is repeated in g times.",
"Afterwards, H represents the combined visual and language features in the question space incorporating diverse aspects from the two modalities (Yang et al., 2016).",
"In this section, we introduce the Motion-Appearance Fusion module which is our key contribution.",
"Depending on what the question ultimately asks about, the model is supposed to decide which features are more relevant among appearance and motion information, or a combination of both.",
"To do this, we produce appearance-centered, motion-centered, and all-mixed features and aggregate them depending on question context.",
"Based on the previous step, we obtain cross-modal combined Figure 2: Motion-Appearance Fusion module.",
"features H a and H m in terms of appearance and motion.",
"We concatenate these two matrices and define U as: U = (cid:20) H a H m (cid:21) , U R 2 L d (6) Motion-Appearance-centered Attention.",
"We first define regular scaled dot-product attention to attend features to diverse aspects: Attention( Q, K, V ) = softmax( QK (cid:62) d k ) V (7) where Q , K , V denotes the query, key, and value, respectively.",
"To obtain motion-centered, appearance-centered and mixed attention, we substitute U with the query, and H a , H m , U with the key and value in the equation 7 as: P a = Attention( U, H a , H a ) P m = Attention( U, H m , H m ) P all = Attention( U, U, U ) Z a/m/all = LayerNorm( P a/m/all + U ) (8) where P R 2 L d and Z R 2 L d .",
"As in the first line of the equation 8, we add projected appearance features P a on each appearance and motion feature to obtain Z a , since the matrix U is the concatenation of H a and H m .",
"Therefore, we argue that Z a contains appearance-centered information.",
"Similarly, Z m/all contains motion-centered and all-mixed features, respectively.",
"We argue that the Motion-Appearance-centered attention fuses appearance and motion features in various proportions and these three matrices work like multi-head attention sharing the task of capturing diverse information, and become synergistic when combined.",
"Question-Guided Fusion.",
"For question-guided fusion, we first define z a/m/all as the sum of matrix Z a/m/all R 2 L d over sequence length 2 L .",
"We obtain attention scores between each z a/m/all and question context vector q : a/m/all = softmax( q ( z a/m/all ) (cid:62) d z ) (9) where q denotes the last hidden vector.",
"The attention score a/m/all can be interpreted as the importance of each matrix Z based on question context.",
"We obtain the question-guided fusion matrix O as: S = a Z a + m Z m + all Z all O = LayerNorm( S + FFN( S )) (10) where O R 2 L d is obtained by linear transformation and a residual connection after weighted sum.",
"We aggregate information by attention over the sequence length of O : i = softmax(FFN( O i )) f = 2 L (cid:88) i =1 i O i (11) The final output vector f R d is used for answer prediction.",
"The video QA task can be divided into counting, open-ended word, and multiple-choice tasks (Jang et al., 2017).",
"Our method trains the model and predicts the answer based on the three tasks similar to previous work.",
"The counting task is formulated as a linear regression of the final output vector f .",
"We obtain the final answer by rounding the result and we minimize Mean Squared Error (MSE) loss.",
"The open-ended word task is essentially a classification task over the whole answer set.",
"We calculate a classification score by applying a linear classifier and softmax function on the final output f and train the model by minimizing cross-entropy loss.",
"For the multiple-choice task, like in previous work (Jang et al., 2017), we attach an answer to the question and obtain M candidates.",
"Then, we obtain the score for each of the M candidates by a linear transformation to the output vector f .",
"We minimize the hinge loss within every pair of candidates, max (0 , 1 + s n s p ) , where s n and s p are scores from incorrect and correct answers respectively.",
"In this section, we evaluate our proposed model on three Video QA datasets: TGIF-QA, MSVD-QA, and MSRVTT-QA.",
"We first introduce each dataset and compare our results with the state-of-the-art methods.",
"Then, we report ablation studies and include visualizations to show how each module in MASN works.",
"TGIF-QA (Jang et al., 2017) is a large-scale dataset that consists of 165K QA pairs collected from 72K animated GIFs.",
"The length of video clips is very short, in general.",
"TGIF-QA consists of four types of tasks: Count, Action, State transition (Trans.), and FrameQA.",
"Count is an open-ended question to count how many times an action repeats.",
"Action is a task to find action repeated at certain times, and Transition aims to identify a state transition over time.",
"Both types are multiple-choice tasks.",
"Lastly, FrameQA is an open-ended question that can be solved from just one frame, similar to image QA.",
"MSVD-QA & MSRVTT-QA (Xu et al., 2017) are automatically generated from video descriptions.",
"It consists of 1,970 video clips and 50K and 243K QA pairs, respectively.",
"The average video lengths are 10 seconds and 15 seconds respectively.",
"Questions belong to five types: what, who, how, when, and where.",
"The task is open-ended with a pre-defined answer sets of size 1,000 and 4,000, respectively.",
"We first extract frames with 6 fps for all datasets.",
"In the case of appearance features , we sample 1 frame out of 4 to avoid information redundancy.",
"We apply Faster R-CNN (Ren et al., 2016) pre-trained on Visual Genome (Krishna et al., 2017) to obtain local features.",
"The number of extracted objects is N = 10 .",
"For global features, we use ResNet-152 pre-trained on ImageNet (Deng et al., 2009).",
"In the the case of motion features , we apply I3D pre-trained on the Kinetics action recognition dataset (Kay et al., 2017).",
"For the input of I3D, we concatenate a set of 8 frames around the sampled frame mentioned above.",
"In terms of training details, we employ Adam optimizer with learning rate as 10 4 .",
"The number of BAN glimpse g is 4.",
"We set the batch size as 32 for the Count and FrameQA tasks and 16 for Action and Trans.",
"tasks.",
"TGIF-QA.",
"Compared with ST-VQA (Jang et al., 2017), Co-Mem (Gao et al., 2018), PSAC (Li et al., 2019), STA (Gao et al., 2019), HME (Fan et al., 2019), and recent SoTA models: HGA, L-GCN, QueST, HCRN (Jiang and Han, 2020; Huang et al., 2020; Jiang et al., 2020; Le et al., 2020), MASN shows the best results for three tasks: Count,",
"Trans., and Action, outperforming the baseline methods by a large margin as shown in Table 1.",
"In the case of FrameQA, the performance is similar to QueST.",
"However, considering that there exists some tradeoff between the performance of Count and FrameQA since Count focuses on identifying temporal information and FrameQA focuses on spatial information, MASN shows the best overall performance on the entire task.",
"MSVD-QA & MSRVTT-QA.",
"As shown in Table 2, MASN outperforms the best baseline methods, QuesT and HCRN by approximately 2% on MSVD-QA, and shows competitive results on MSRVTT-QA.",
"Since these datasets are composed of wh-questions, such as what or who, the question sets seemingly resemble FrameQA in TGIF-QA, as they tend to focus on spatial appearance features.",
"This means that MASN is able to capture spatial details well based on the spatiotemporally mixed features.",
"Analyzing the impact of motion module and appearance module.",
"We investigate the effect of each module as seen in Figure 1.",
"In Table 3, the 1 st and 2 nd row represent the result of using only the Appearance and Motion module, respectively.",
"The 3 rd row shows the result of just concatenating appearance and motion features from each module and flattening them, by substituting the input X for O in equation 11.",
"Most existing SOTA models utilize only ResNet features for spatio-temporal reasoning based on the difference of vectors over time.",
"Using only the Appearance module is similar to most of these existing methods, which can catch spatio-temporal relations relatively well.",
"On the other hand, we found that the accuracy on FrameQA when only using the Motion module is about 7% lower than when using the Appearance module.",
"This means the Motion module is limited in its ability to capture the appearance details.",
"However, comparing the 1 st and 3 rd row in Table 3, the performance in the Action and Trans.",
"tasks increase consistently when the Motion module is added compared to using only the Appearance module.",
"This indicates that the Motion module is a meaningful addition.",
"Lastly, compared to the 1 st , 2 nd and 3 rd row, when integrating the output from both modules there is a further overall performance improvement.",
"This indicates a synergistic effect occurs when integrating both the appearance and motion feature after obtaining them as high-level features.",
"Analyzing the impact of fusion module.",
"We show ablation studies inside the fusion module represented in Table 3.",
"The 4 th row indicates the performance of our proposed MASN architecture.",
"The results in the Single-Attention Fusion' row use only one type of attention among appearance-centered, motion-centered, and all-mixed as seen in equation",
"8. The results in the Dual-Attention Figure 3: Qualitative results on TGIF-QA dataset.",
"Due to the nature of video, when a question such as How many times does the man in the white shirt put his hand on the head? is given, the model is supposed to find the motion information put while catching the appearance information man in white shirt or hand on head, and finally mixing them in different proportions depending on the context of question.",
"Comparing the result of the 3 rd (without fusion) row and MASN first, MASN shows better performance across tasks.",
"This means mixing appearance and motion features in various proportions using the Motion-Appearance-centered Fusion module and computing the weighted fusion via the Question-Guided Fusion module contributes to the performance.",
"When comparing the general performance with the number of attention types in fusion module, using single, dual, and triple attention (MASN) shows increasingly better performance in the same order.",
"This indicates that focusing on different aspects and integrating each attended feature performs better than calculating attention at once.",
"Additionally, comparing the result of using only appearance or motion-centered attention in Single' with both of them in Dual', we find that using both features shows better performance, which means they play complementary roles for each other.",
"Similarly, we argue the reason for the performance increase in FrameQA in the Motion' row of Single-Att. Fusion' is due to the fact that the model can find relevant appearance information better based on motion information.",
"We give examples of each attention score matrix from Motion-Appearance Fusion module in Figure 3.",
"We draw two conclusions from the Figure: (1) each attention map catches different relations similarly to multi-head attention, (2) each attention map is used to a different extend depending on the type of task.",
"For example, in FrameQA, the appearance-centered's attention map captures which appearance trait to find focusing on how many'.",
"On the other hand, the motion-centered's and all-mixed's attention map attend on waving' or hands' to catch motion-related information.",
"In Action, similar to FrameQA, the appearance-centered's attention map attends on head' which is the object of action, while the motion-centered's attention map catch nod' which is related to movement.",
"However, in the case of the Count task, the two attention weights are not as sparse as scores in the other tasks.",
"We think this dense attention map causes the inconsistency in the performance increase between Count task and Action and Trans.",
"task, although questions for all of these three tasks ask for motion information.",
"In this paper, we proposed a Motion-Appearance Synergistic Networks to fuse and create a synergy between motion and appearance features.",
"Through the Motion and Appearance modules, MASN manages to find motion and appearance clues to solve the question, while modulating the amount of information used of each type through the Fusion module.",
"Experimental results on three benchmark datasets show the effectiveness of our proposed MASN architecture compared to other models.",
"Acknowledgement The authors would like to thank Ho-Joon Song, Yu-Jung Heo, Bjorn Bebensee, Seonil Son, Kyoung-Woon On, Seongho Choi and Woo-Suk Choi for helpful comments and editing.",
"This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (2015-0-00310-SW.StarLab/25%, 2017-0-01772-VTT/25%, 2018-0-00622-RMI/25%, 2019-0-01371-BabyMind/25%) grant funded by the Korean government."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain"
] |
[
"Complex question answering over knowledge base (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, set operation.",
"Existing benchmarks have some shortcomings that limit the development of Complex KBQA: 1) they only provide QA pairs without explicit reasoning processes; 2) questions are poor in diversity or scale.",
"To this end, we introduce KQA Pro, a dataset for Complex KBQA including ~120K diverse natural language questions.",
"We introduce a compositional and interpretable programming language KoPL to represent the reasoning process of complex questions.",
"For each question, we provide the corresponding KoPL program and SPARQL query, so that KQA Pro serves for both KBQA and semantic parsing tasks.",
"Experimental results show that SOTA KBQA methods cannot achieve promising results on KQA Pro as on current datasets, which suggests that KQA Pro is challenging and Complex KBQA requires further research efforts.",
"We also treat KQA Pro as a diagnostic dataset for testing multiple reasoning skills, conduct a thorough evaluation of existing models and discuss further directions for Complex KBQA.",
"Our codes and datasets can be obtained from https://github.com/shijx12/ KQAPro_Baselines .",
"Thanks to the recent advances in deep models, especially large-scale unsupervised representation learning (Devlin et al., 2019), question answering of simple questions over knowledge base (Simple KBQA), i.e., single-relation factoid questions (Bor-des et al., 2015), begins to saturate (Petrochuk and Zettlemoyer, 2018; Wu et al., 2019; Huang",
"et al., 2019).",
"However, tackling complex questions (Complex KBQA) is still an ongoing challenge, due to the unsatisfied capability of compositional reasoning.",
"As shown in Table 1, to promote the community development, several benchmarks are proposed for Complex KBQA, including LC-QuAD2.0 (Dubey et al., 2019), ComplexWebQuestions (Talmor and Berant, 2018), MetaQA (Zhang et al., 2018), CSQA (Saha et al., 2018), CFQ (Key-sers et al., 2020), and so on.",
"However, they suffer from the following problems: 1) Most of them only provide QA pairs without explicit reasoning processes, making it challenging for models to learn compositional reasoning.",
"Some researchers try to learn the reasoning processes with reinforcement learning (Liang et al., 2017; Saha et al., 2019; Ansari et al., 2019) and searching (Guo et al., 2018).",
"However, the prohibitively huge search space hinders both the performance and speed, especially when the question complexity increases.",
"For example, Saha et al. (2019) achieved a 96.52% F1 score on simple questions in CSQA, whereas only 0.33% on complex questions that require comparative count.",
"We think that intermediate supervision is needed for learning the compositional reasoning, mimicking the learning process of human beings (Holt, 2017).",
"2) Questions are not satisfactory in diversity and scale.",
"For example, MetaQA (Zhang et al., 2018) questions are generated using just 36 templates, and they only consider relations between entities, ignoring literal attributes; LC-QuAD2.0 (Dubey et al., 2019) and ComplexWebQuestions (Talmor and Berant, 2018) have fluent and diverse human-written questions, but their scale is less than 40K.",
"To address these problems, we create KQA Pro , a large-scale benchmark for Complex KBQA.",
"In KQA Pro, we define a Knowledge-oriented Programming Language ( KoPL ) to explicitly describe 6101 Question 1: SPARQL: KoPL: When did Cleveland Cavaliers pick up LeBron James SELECT DISTINCT",
"the reasoning process for solving complex questions (see Fig. 1).",
"A program is composed of symbolic functions , which define the basic, atomic operations on KBs.",
"The composition of functions well captures the language compositionality (Ba-roni, 2019).",
"Besides KoPL, following previous works (Yih et al., 2016; Su et al., 2016), we also provide the corresponding SPARQL for each question, which solves a complex question by parsing it into a query graph.",
"Compared with SPARQL, KoPL 1) provides a more explicit reasoning process.",
"It divides the question into multiple steps, making human understanding easier and the intermediate results more transparent; 2) allows humans to control the model behavior better, potentially supporting human-in-the-loop.",
"When the system gives a wrong answer, users can quickly locate the error by checking the outputs of intermediate functions.",
"We believe the compositionality of KoPL and the graph structure of SPARQL are two complementary directions for Complex KBQA.",
"To ensure the diversity and scale of KQA Pro, we follow the synthesizing and paraphrasing pipeline in the literature (Wang et al., 2015a; Cao et al., 2020), first synthesize large-scale (canonical question, KoPL, SPARQL) triples, and then paraphrase the canonical questions to natural language questions (NLQs) via crowdsourcing.",
"We combine the following two factors to achieve diversity in questions: (1) To increase structural variety, we leverage a varied set of templates to cover all the possible queries through random sampling and recursive composing; (2) To increase linguistic variety, we filter the paraphases based on their edit distance with the canonical utterance.",
"Finally, KQA Pro consists of 117,970 diverse questions that involve varied reasoning skills ( e.g. , multi-hop reasoning, value comparisons, set operations, etc. ).",
"Besides a QA dataset, it also serves as a semantic parsing dataset.",
"To the best of our knowledge, KQA Pro is currently the largest corpus for NLQ-to-SPARQL and NLQ-to-Program tasks.",
"We reproduce the state-of-the-art KBQA models and thoroughly evaluate them on KQA Pro.",
"From the experimental results, we observe significant performance drops of these models compared with on existing KBQA benchmarks.",
"It indicates that Complex KBQA is still challenging, and KQA Pro could support further explorations.",
"We also treat KQA Pro as a diagnostic dataset for analyzing a model's capability of multiple reasoning skills, and discover weaknesses that are not widely known, e.g. , current models struggle on comparisonal reasoning for lacking of literal knowledge ( i.e. , (Le-Bron James, height, 206 centimetre) ), or perform poorly on questions whose answers are not obver-seved in the training set.",
"We hope all contents of KQA Pro could encourage the community to make further breakthroughs.",
"Complex KBQA aims at answering complex questions over KBs, which requires multiple reasoning capabilities such as multi-hop inference, quantitative comparison, and set operation (Lan et al., 2021).",
"Current methods for Complex KBQA can be grouped into two categories: 1) semantic parsing based methods (Liang et al., 2017; Guo et al., 2018; Saha et al., 2019; Ansari et al., 2019), which parses a question to a symbolic logic form ( e.g. , calculus (Artzi et al., 2013), -DCS (Liang, 2013; Pasupat and Liang, 2015; Wang et al., 2015b; Pasupat and Liang, 2016), SQL (Zhong et al., 2017), AMR (Banarescu et al., 2013), SPARQL (Sun et al., 2020), and etc.",
") and then executes it against the KB and obtains the final answers; 2) information retrieval based methods (Miller et al., 2016; Saxena 6102 Dataset multiple kinds of knowledge number of questions naturallanguage querygraphs multi-stepprograms WebQuestions (Berant et al., 2013) 5,810 WebQuestionSP (Yih et al., 2016) 4,737 GraphQuestions (Su et al., 2016) 5,166 LC-QuAD2.0 (Dubey et al., 2019) 30,000 ComplexWebQuestions (Talmor and Berant, 2018) 34,689 MetaQA (Zhang et al., 2018) 400,000 CSQA (Saha et al., 2018) 1.6M CFQ (Keysers et al., 2020) 239,357 GrailQA (Gu et al., 2021) 64.331 KQA Pro (ours) 117,970 Table 1: Comparison with existing datasets of Complex KBQA.",
"et al., 2020; Schlichtkrull et al., 2018; Zhang et al., 2018; Zhou et al., 2018; Qiu et al., 2020; Shi et al., 2021), which constructs a question-specific graph extracted from the KB and ranks all the entities in the extracted graph based on their relevance to the question.",
"Compared with information retrieval based methods, semantic parsing based methods provides better interpretability by generating expressive logic forms, which represents the intermediate reasoning process.",
"However, manually annotating logic forms is expensive and labor-intensive, and it is challenging to train a semantic parsing model with weak supervision signals ( i.e. , question-answer pairs).",
"Lacking logic form annotations turns out to be one of the main bottlenecks of semantic parsing.",
"Table 1 lists the widely-used datasets in Complex KBQA community and their features.",
"MetaQA and CSQA have a large number of questions, but they ignore literal knowledge, lack logic form annotations, and their questions are written by templates.",
"Query graphs ( e.g. , SPARQLs) are provided in some datasets to help solve complex questions.",
"However, SPARQL is weak in describing the intermediate procedure of the solution, and the scale of existing question-to-SPARQL datasets is small.",
"In this paper, we introduce a novel logic form KoPL, which models the procedure of Complex KBQA as a multi-step program, and provides a more explicit reasoning process compared with query graphs.",
"Furthermore, we propose KQA Pro, a large-scale semantic parsing dataset for Complex KBQA, which contains ~120k diverse natural language questions with both KoPL and SPARQL annotations.",
"It is the largest NLQ-to-SPARQL dataset as far as we know.",
"Compared with these existing datasets, KQA Pro serves as a more well-rounded benchmark.",
"Relation , the link between entities or concepts.",
"Entities are linked to concepts via the relation instance of .",
"Concepts are organized into a tree structure via relation subclass of .",
"Attribute , the literal information of an entity.",
"An attribute has a key and a value, which is one of four types 1 : string, number, date, and year.",
"The number value has an extra unit, e.g. , 206 centimetre.",
"LeBron James) .",
"Literal knowledge , the triple with form (entity, attribute key, attribute value), e.g. , (LeBron James, height, 206 centimetre) .",
"Qualifier knowledge , the triple whose head is a relational or literal triple, e.g. , ((LeBron James, drafted by, Cleveland Cavaliers), point in time, 2003) .",
"A qualifier also has a key and a value.",
"We design KoPL, a compositional and interpretable programming language to represent the reasoning process of complex questions.",
"It models the complex procedure of question answering with a program of intermediate steps.",
"Each step involves a function with a fixed number of arguments.",
"Every program can be denoted as a binary tree.",
"As shown in Fig. 1, a directed edge between two nodes represents the dependency relationship between two 1 Wikidata also has other types like geographical and time.",
"functions.",
"That is, the destination function takes the output of the source function as its argument.",
"The tree-structured program can also be serialized by post-order traversal, and formalized as a sequence with n functions.",
"The general form is shown below.",
"Each function f i takes in a list of textual arguments a i , which need to be inferred according to the question, and a list of functional arguments b i , which come from the output of previous functions.",
"Take function Relate as an example, it has two textual inputs: relation and direction (i.e., forward or backward , meaning the output is object or subject).",
"It has one functional input: a unique entity.",
"Its output is a set of entities that hold the specific relation with the input entity.",
"For example, in Question 2 of Fig. 1, the function Relate ([ father, forward ], [ LeBron James Jr. ]) returns LeBron James, the father of LeBron James Jr. (the direction is omitted in the figure for simplicity).",
"We analyze the generic, basic operations for Complex KBQA, and design 27 functions 2 in KoPL.",
"They support KB item manipulation ( e.g. , Find , Relate , FilterConcept , QueryRelationQualifier , etc. ), various reasoning skills ( e.g. , And , Or , etc. ), and multiple question types ( e.g. , QueryName , SelectBetween , etc. ).",
"By composing the finite functions into a KoPL program 3 , we can model the reasoning process of infinite complex questions.",
"Note that qualifiers play an essential role in disambiguating or restricting the validity of a fact (Galkin et al., 2020; Liu et al., 2021).",
"However, they have not been adequately modeled in current KBQA models or datasets.",
"As far as we know, we are the first to explicitly model qualifiers in Complex KBQA.",
"To build KQA Pro dataset, first, we extract a knowledge base with multiple kinds of knowledge (Section 4.1).",
"Then, we generate canonical questions, corresponding KoPL programs and SPARQL queries with novel compositional strategies (Sec-tion 4.2).",
"In this stage, we aim to cover all the possible queries through random sampling and recursive composing.",
"Finally, we rewrite canonical questions into natural language via crowdsourcing (Section 4.3).",
"To further increase linguistic variety, 2 The complete function instructions are in Appendix A. 3 The grammar rules of KoPL are in Appendix B. we reject the paraphrases whose edit distance with the canonical question is small.",
"We took the entities of FB15k-237 (Toutanova et al., 2015) as seeds, and aligned them with Wikidata via Freebase IDs 4 .",
"The reasons are as follows: 1) The vast amount of knowledge in the full knowledge base ( e.g. , full Freebase (Bollacker et al., 2008) or Wikidata contains millions of entities) may cause both time and space issues, while most of the entities may never be used in questions.",
"2) FB15k-237 is a high-quality, dense subset of Freebase, whose alignment to Wikidata produces a knowledge base with rich literal and qualifier knowledge.",
"We added 3,000 other entities with the same name as one of FB15k-237 entities to increase the disambiguation difficulty.",
"The statistics of our final knowledge base are listed in Table 2. # Con.",
"To generate diverse complex questions in a scalable manner, we propose to divide the generation into two stages: locating and asking .",
"In locating stage we describe a single entity or an entity set with various restrictions, while in asking stage we query specific information about the target entity or entity set.",
"We define several strategies for each stage.",
"By sampling from them and composing the two stages, we can generate large-scale and diverse questions with a small number of templates.",
"Fig. 2 gives an example of our generation process.",
"For locating stage, we propose 7 strategies and show part of them in the top section of Table 3. We can fill the placeholders of templates by sampling from KB to describe a target entity.",
"We support quantitative comparisons of 4 operations: equal, not equal, less than, and greater than, indicated by <OP> of the template.",
"We support optional qualifier restrictions, indicated by ( <QK> is <QV> ), 4 The detailed extracting process is in Appendix C. 6104 Strategy Template Example Locating Stage Entity Name LeBron James Concept + Literal the <C> whose <K> is <OP> <V> ( <QK> is <QV> ) the basketball team whose social media followers is greater than 3,000,000 (point in time is 2021) Concept + Relational the <C> that <P> <E> ( <QK> is <QV> ) the basketball player that was drafted by Cleveland Cavaliers Recursive Multi-Hop unfold <E> in a Concept + Relational description the basketball player that was drafted by the basketball team whose social media followers is greater than 3,000,000 (point in time is 2021) Intersection Condition 1 and Condition 2 the basketball players whose height is greater than 190 centimetres and less than 220 centimetres Asking Stage QueryName What/Who is <E> Who is the basketball player whose height is equal to 206 centimetres?",
"In Recursive Multi-Hop , we replace the entity of a relational condition with a more detailed description, so that we can easily increase the hop of questions.",
"For asking stage, we propose 9 strategies and show some of them in the bottom section of Table 3. Our SelectAmong is similar to argmax and argmin operations in -DCS.",
"The complete generation strategies are shown in Appendix D due to space limit.",
"Our generated instance consists of five elements: question, SPARQL query, KoPL program, 10 answer choices, and a golden answer.",
"Choices are selected by executing an abridged SPARQL 5 , which randomly drops one clause from the complete SPARQL.",
"With these choices, KQA Pro supports both multiple-choice setting and open-ended setting.",
"We randomly generate lots of questions, and only preserve those with a unique answer.",
"For example, since Akron has different populations in different years, we will drop questions like What is the population of Akron , unless the time constraint ( e.g. , in 2010 ) is specified.",
"After large-scale generation, we release the generated questions on Amazon Mechanical Turk (AMT)",
"and ask the workers to paraphrase them without changing the original meaning.",
"For the convenience of paraphrasing, we visualize the KoPL flowcharts like Fig. 1 to help workers understand complex questions.",
"We allow workers to mark a question as confusing if they cannot understand it or find logical errors.",
"These instances will be removed from our dataset.",
"After paraphrasing, we evaluate the quality by 5 other workers.",
"They are asked to check whether the paraphrase keeps the original meaning and give a fluency rating from 1 to 5.",
"We reject those paraphrases which fall into one of the following cases: (1) marked as different from the original canonical question by more than 2 workers; (2) whose average fluency rating is lower than 3; (3) having a very small edit distance with the canonical question.",
"Our KQA Pro dataset consists of 117,970 instances with 24,724 unique answers.",
"Fig.",
"3(a) shows the question type distribution of KQA Pro.",
"Within the 9 types, SelectAmong accounts for the least fraction (4.6%), while others account for more or less than 10%.",
"Fig.",
"3(b) shows that multi-hop questions cover 73.7% of KQA Pro, and 4.7% questions even require at least 5 hops.",
"We compare the question length distribution of different Complex KBQA 6105 Which team picked LeBron James?",
"3(c).",
"We observe that our KQA Pro has longer questions than others on average.",
"In KQA Pro, the average length of questions/program-s/SPARQLs is 14.95/4.79/35.52 respectively.",
"More analysis is included in Appendix G. 5 Experiments The primary goal of our experiments is to show the challenges of KQA Pro and promising Complex KBQA directions.",
"First, we compare the performance of state-of-the-art KBQA models on current datasets and KQA Pro, to show whether KQA Pro is challenging.",
"Then, we treat KQA Pro as a diagnostic dataset to investigate fine-grained reasoning abilities of models, discuss current weakness and promising directions.",
"We further conduct an experiment to explore the generation ability of our",
"proposed model.",
"Last, we provide a case study to show the interpretablity of KoPL.",
"Benchmark Settings .",
"We randomly split KQA Pro to train/valid/test set by 8/1/1, resulting in three sets with 94,376/11,797/11,797 instances.",
"About 30% answers of the test set are not seen in training.",
"Representative Models .",
"KBQA models typically follow a retrieve-and-rank paradigm, by constructing a question-specific graph extracted from the KB and ranks all the entities in the graph based on their relevance to the question (Miller et al., 2016; Saxena et al., 2020; Schlichtkrull et al., 2018; Zhang et al., 2018; Zhou et al., 2018; Qiu et al., 2020); or follow a parse-then-execute paradigm, by parsing a question to a query graph (Berant et al., 2013; Yih et al., 2015) or program (Liang et al., 2017; Guo et al., 2018; Saha et al., 2019; Ansari et al., 2019) through learning from question-answer pairs.",
"Experimenting with all methods is logistically challenging, so we reproduce a representative subset of mothods: KVMemNet (Miller et al., 2016), a well-known model which organizes the knowledge into a memory of key-value pairs, and iteratively reads memory to update its query vector.",
"EmbedKGQA (Saxena et al., 2020), a state-of-the art model on MetaQA, which incorporates knowledge embeddings to improve the reasoning performance.",
"SRN (Qiu et al., 2020), a typical path search model to start from a topic entity and predict a sequential relation path to find the target entity.",
"RGCN (Schlichtkrull et al., 2018), a variant 6106 of graph convolutional networks, tackling Complex KBQA through the natural graph structure of knowledge base.",
"Our models.",
"Since KQA Pro provides the annotations of SPARQL and KoPL, we directly learn our parsers using supervised learning by regarding the semantic parsing as a sequence-to-sequence task.",
"We explore the widely-used sequence-to-sequence model RNN with attention mechanism (Dong and Lapata, 2016), and the pretrained generative language model BART (Lewis et al., 2020), as our SPARQL and KoPL parsers.",
"For KoPL learning, we design a serializer to translate the tree-structured KoPL to a sequence.",
"For example, the KoPL program in Fig. 2 is serialized as: Find arg LeBron James func Relate arg drafted by arg backward func FilterConcept arg team func QueryName .",
"Here, arg and func are special tokens we designed to indicate the structure of KoPL.",
"To compare machine with Human , we sample 200 instances from the test set, and ask experts to answer them by searching our knowledge base.",
"Implementation Details .",
"For our BART model, we used the bart-base model of HuggingFace 6 .",
"We used the optimizer Adam (Kingma and Ba, 2015) for all models.",
"We searched the learning rate for BART paramters in {1e-4, 3e-5, 1e-5}, the learning rate for other parameters in {1e-3, 1e-4, 1e-5}, and the weight decay in {1e-4, 1e-5, 1e-6}.",
"According to the performance on validation set, we finally used learning rate 3e-5 for BART parameters, 1e-3 for other parameters, and weight decay 1e-5.",
"We compare the performance of KBQA models on KQA Pro with MetaQA and WebQSP (short for WebQuestionSP), two commonly used benchmarks in Complex KBQA.",
"The experimental results are in Table 4, from which we observe that: Although the models perform well on MetaQA and WebQSP, their performances are significantly lower and not satisfying on KQA Pro.",
"It indicates that our KQA Pro is challenging and the Complex KBQA still needs more research efforts.",
"Actually, 1) Both MetaQA and WebQSP mainly focus on relational knowledge, i.e. , multi-hop questions.",
"Therefore, previous models on these datasets are designed to handle only entities and relations.",
"In comparison, KQA Pro includes three types of 6 https://github.com/huggingface/transformers knowledge, i.e. , relations, attributes, and qualifiers, thus is much more challenging.",
"2) Compared with MetaQA which contains template questions, KQA Pro contains diverse natural language questions and can evaluate models' language understanding abilities.",
"3) Compared with WebQSP which contains 4,737 fluent and natural questions, KQA Pro covers more question types ( e.g. , verification, counting) and reasoning operations ( e.g. , intersect, union).",
"KQA Pro can serve as a diagnostic dataset for in-depth analyses of reasoning abilities ( e.g. , counting, comparision, logical reasoning, etc. ) for Complex KBQA, since KoPL programs underlying the questions provide tight control over the dataset.",
"We categorize the test questions to measure fine-grained ability of models.",
"Specifically, Multi-hop means multi-hop questions, Qualifier means questions containing qualifier knowledge, Comparison means quantitative or temporal comparison between two or more entities, Logical means logical union or intersection, Count means questions that ask the number of target entities, Verify means questions that take yes or no as the answer, Zero-shot means questions whose answer is not seen in the training set.",
"The results are shown in Table 5, from which we have the following observations: (1) Benefits of intermediate reasoning supervision.",
"Our RNN and BART models outperform current models significantly on all reasoning skills.",
"This is because KoPL program and SPARQL query provide intermediate supervision which benefits the learning process a lot.",
"As (Dua et al., 2020) suggests, future dataset collection efforts should set aside a fraction of budget for intermediate annotations, particularly as the reasoning required 6107 Model Overall Multi-hop Qualifier Compari-son Logical Count Verify Zero-shot KVMemNet 16.61 16.50 18.47 1.17 14.99 27.31 54.70 0.06 SRN -12.33 ---EmbedKGQA 28.36 26.41 25.20 11.93 23.95 32.88 61.05 0.06 RGCN 35.07 34.00 27.61 30.03 35.85 41.91 65.88 0.00 RNN SPARQL 41.98 36.01 19.04 66.98 37.74 50.26 58.84 26.08 RNN KoPL 43.85 37.71 22.19 65.90 47.45 50.04 42.13 34.96 BART SPARQL 89.68 88.49 83.09 96.12 88.67 85.78 92.33 87.88 BART KoPL 90.55 89.46 84.76 95.51 89.30 86.68 93.30 89.59 BART KoPL CG 77.86 77.86 61.46 93.61 77.88 79.17 89.01 76.04 Human 97.50 97.24 95.65 100.00 98.18 83.33 95.24 100.00 Table 5: Accuracy of different models on KQA Pro test set.",
"We hope our dataset KQA Pro with KoPL and SPARQL annotations will help guide further research in Complex KBQA.",
"(2) More attention to literal and qualifier knowledge.",
"Existing models perform poorly in situations requiring comparison capability.",
"This is because they only focus the relational knowledge, while ignoring the literal and qualifier knowledge.",
"We hope our dataset will encourage the community to pay more attention to multiple kinds of knowledge in Complex KBQA.",
"(3) Generalization to novel questions and answers.",
"For zero-shot questions, current models all have a close to zero performance.",
"This indicates the models solve the questions by simply memorizing their training data, and perform poorly on generalizing to novel questions and answers.",
"We further use KQA Pro to test the ability of KBQA models to generalize to questions that contain novel combinations of the elements observed during training.",
"Following previous works, we conduct the productivity experiment (Lake and Baroni, 2018; Shaw et al., 2021), which focuses on generalization to longer sequences or to greater compositional depths than have been seen in training (for example, from a length 4 program to a length 5 program).",
"Specifically, we take the instances with short programs as training examples, and those with long programs as test and valid examples, resulting in three sets including 106,182/5,899/5,899 examples.",
"The performance of BART KoPL drops from 90.55% to 77.86%, which indicates learning to generalize compositionally for pretrained language models requires more research efforts.",
"Our KQA Pro provides an environment for further experimentation on compositional generalization.",
"To further understand the quality of logical forms predicted by the BART parser, we show a case in Fig. 4, for which the SPARQL and KoPL parsers both give wrong predictions.",
"The SPARQL parser fails to understand prior to David Lloyd George and gives a totally wrong prediction for this part.",
"The KoPL parser gives a function prediction which is semantically correct but very different from our generated golden one.",
"It is a surprising result, revealing that the KoPL parser can understand the semantics and learn multiple solutions for each question, similar to the learning process of humans.",
"We manually correct the errors of predicted SPARQL 6108 and KoPL and mark them in red.",
"Compared to SPARQLs, KoPL programs are easier to be understood and more friendly to be modified.",
"In this work, we introduce a large-scale dataset with explicit compositional programs for Complex KBQA.",
"For each question, we provide the corresponding KoPL program and SPARQL query so that KQA Pro can serve for both KBQA and semantic parsing tasks.",
"We conduct a thorough evaluation of various models, discover weaknesses of current models and discuss future directions.",
"Among these models, the KoPL parser shows great interpretability.",
"As shown in Fig. 4, when the model predicts the answer, it will also give a reasoning process and a confidence score (which is ommited in the figure for simplicity).",
"When the parser makes mistakes, humans can easily locate the error through reading the human-like reasoning process or checking the outputs of intermediate functions.",
"In addition, using human correction data, the parser can be incrementally trained to improve the performance continuously.",
"We will leave this as our future work.",
"This work is founded by the National Key Research and Development Program of China (2020AAA0106501), the Institute for Guo Qiang, Tsinghua University (2019GQB0003), Huawei Noah's Ark Lab and Beijing Academy of Artificial Intelligence."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"We release a new benchmark for lexical substitution, the task of finding appropriate substitutes for a target word in a context.",
"For writing, lexical substitution systems can assist humans by suggesting words that humans cannot easily think of.",
"However, existing benchmarks depend on human recall as the only source of data, and therefore lack coverage of the substitutes that would be most helpful to humans.",
"Furthermore, annotators often provide substitutes of low quality, which are not actually appropriate in the given context.",
"We collect higher-coverage and higher-quality data by framing lexical substitution as a classification problem, guided by the intuition that it is easier for humans to judge the appropriateness of candidate substitutes than conjure them from memory.",
"To this end, we use a context-free thesaurus to produce candidates and rely on human judgement to determine contextual appropriateness.",
"Compared to the previous largest benchmark, our SWORDS benchmark has 3 x as many substitutes per target word for the same level of quality, and its substitutes are 1 .",
"4 x more appropriate (based on human judgement) for the same number of substitutes.",
"Imagine you are writing the message I read an amazing paper today to a colleague, but you want to choose a more descriptive adjective to replace amazing.",
"At first you might think of substitutes like awesome and great, but feel that these are also unsatisfactory.",
"You turn to a thesaurus for inspiration, but among reasonable alternatives like incredible and fascinating are words like prodigious which do not quite fit in your context.",
"Ultimately, you choose to go with fascinating, but reaching this decision required a non-trivial amount of time and effort.",
"Research on lexical substitution (McCarthy, 2002; McCarthy and Navigli, 2007; Erk and Pad, 2008; Szarvas et al., 2013; Kremer et al., 2014; Melamud et al., 2015; Hintz and Biemann, 2016; Zhou et al., 2019; Arefyev et al., 2020) considers the task of replacing a target word in context with appropriate substitutes.",
"There are two widely-used English benchmarks for this task: SEMEVAL (McCarthy and Navigli, 2007) and COINCO (Kremer et al., 2014).",
"For both benchmarks, data was collected by asking human annotators to think of substitutes from memory.",
"Because lexical substitution was originally proposed as a means for evaluating word sense disambiguation systems (McCarthy, 2002), this data collection strategy was designed to avoid a bias towards any particular word sense inventory.",
"In this work, we consider a different use case for lexical substitution: writing assistance.",
"For this use case, we are interested in evaluating a system's ability to produce appropriate substitutes that are likely to be difficult for humans to think of.",
"We show that the data collection strategy used in past benchmarks yields low coverage of such uncommon substitutesfor our previous example, they might contain words like awesome and great, but miss words like incredible and fascinating.",
"Furthermore, we observe that these benchmarks have low quality , containing words like fun, which are easy to think of, but not quite appropriate in context.",
"We present SWORDS the Stanford Word Substitution Benchmarkan English lexical substitution benchmark that raises the bar for both coverage and quality (Table 1).",
"We collect SWORDS by asking human annotators to judge whether a given candidate word is an appropriate substitute for a target word in context, following the intuition that judging a given substitute is easier than producing that same substitute from memory.",
"To bootstrap a set of candidates for humans to annotate, we Context My favorite thing about her is her straightforward honesty.",
"look up target words in an existing context-free thesaurus.",
"1 Because a thesaurus might miss substitutes that would not typically be synonymous with the target word outside of the provided context (e.g. thought-provoking for amazing), we also include human-proposed candidates from the previous COINCO benchmark.",
"Determining whether a substitute is appropriate is intrinsically subjective.",
"To address this, we collect binary labels from up to ten annotators for each substitute, inducing a score for each substitute.",
"In COINCO , analogous scores are derived from the number of independent annotators who thought of a substitutehence, as we will show in Section 4, these scores tend to correspond more to ease-of-recollection than appropriateness.",
"In contrast, scores from SWORDS correspond to appropriateness, and also allow us to explicitly trade off coverage and quality, permitting more nuanced evaluation.",
"Our analysis shows that compared to COINCO , SWORDS has 3 x more substitutes per target word for the same level of quality, and its substitutes are 1 .",
"4 x more appropriate based on scores for the same number of substitutes.",
"We demonstrate that SWORDS is a challenging benchmark by evaluating state-of-the-art lexical substitution systems and large-scale, pre-trained language models including systems based on BERT (Devlin et al., 2019; Zhou et al., 2019) and GPT-3 (Brown et al., 2020).",
"In our evaluation, we find 1 Note that our use of a thesaurus makes SWORDS less appropriate for the original use case for lexical substitution: evaluating word sense disambiguation systems.",
"that humans substantially outperform all existing systems, suggesting that lexical substitution can be used as a downstream language understanding task for pre-trained models.",
"We release SWORDS publicly as a benchmark for lexical substitution, coupled with a Python library that includes previous benchmarks in a common format, standardized evaluation scripts for prescribed metrics, and reproducible re-implementations of several baselines.",
"2 2 Background We describe lexical substitution and briefly introduce two widely-used benchmarks: SEMEVAL (McCarthy and Navigli, 2007), the first benchmark, and COINCO (Kremer et al., 2014), the largest existing benchmark.",
"For a survey of other benchmarks, we refer readers to Kremer et al. (2014), Hintz and Biemann (2016), and Miller (2016).",
"Lexical substitution.",
"Lexical substitution is the task of generating a list of substitutes w (cid:48) that can replace a given target word w in a given context c (McCarthy, 2002): ( context c, target w ) [ substitute w (cid:48) ] .",
"The context c is one or more sentences where the target word w is situated.",
"The target word w is one word in the context, which is either manually chosen by humans (McCarthy and Navigli, 2007) or 2 SWORDS : github.com/p-lambda/swords All experiments reproducible on the CodaLab platform: worksheets.codalab.org/worksheets/0xc924392d555f4b4fbee47be92e3daa0b automatically selected based on the part-of-speech of the target word (Kremer et al., 2014).",
"The substitute w (cid:48) can be a word or phrase.",
"Note that the task of lexical substitution does not consider in-flection and does not involve grammar correction; all benchmarks contain lemmas as substitutes (e.g. run instead of ran).",
"SEMEVAL .",
"The first lexical substitution benchmark, SEMEVAL -2007 Task 10 (McCarthy and Navigli, 2007), contains 201 manually chosen target words.",
"For each target word, 10 sentences were chosen as contexts (mostly at random, but in part by hand) from the English Internet Corpus (Sharoff, 2006) and presented to five human annotators.",
"The five annotators were instructed to produce up to three substitutes from memory as a replacement for the target word in context that preserves the meaning of the original word.",
"This resulted in 12 , 300 labels in total with four substitutes per target word on average.",
"COINCO .",
"The previous largest lexical substitution benchmark, COINCO (Kremer et al., 2014), was constructed by first choosing 2474 contexts from the Manually Annotated Sub-Corpus (Ide et al., 2008, 2010).",
"Then, all content words (nouns, verbs, adjective, and adverbs) in the sentences were selected to be target words in order to reflect a realistic frequency distribution of target words and their senses.",
"Each target word was presented to six human annotators, who were asked to provide up to five substitutions or mark it as unsubstitutable.",
"All the annotators were instructed to provide (prefer-ably single-word) substitutes for the target that would not change the meaning.",
"This resulted in 167 , 446 labels in total and 7 .",
"2 substitutions per target word on average.",
"3 For the rest of the paper, we focus on COINCO (but not SEMEVAL ) as our benchmark is built on COINCO and it is the largest existing benchmark.",
"SWORDS is composed of context, target word, and substitute triples ( c, w, w (cid:48) ), each of which has a score that indicates the appropriateness of the substitute.",
"We consider a substitute to be acceptable if its score is greater than 50% (e.g. bolded words in Table 1) and unacceptable if the score is less than 3 The reported number in Kremer et al. (2014) is 167 , 336 and 10 .",
"71 , respectively.",
"The latter differs as they counted the same substitute multiple times when suggested by multiple humans, whereas we report the number of unique substitutes.",
"or equal to 50%.",
"Similarly, a substitute with a score greater than 0% is considered conceivable , and otherwise inconceivable .",
"Note that these terms are operational definitions for convenience, and different thresholds can be chosen for desired applications.",
"Improving quality.",
"In prior work, annotators were prompted to consider whether a substitute preserves the meaning (McCarthy and Navigli, 2007) or would not change the meaning (Kremer et al., 2014) of the target word.",
"Instead, we ask annotators whether they would actually consider using this substitute as the author of the original sentence.",
"We believe this wording encourages a higher standard.",
"In Section 4.1, we provide evidence that substitutes from SWORDS have higher quality than those from COINCO on average.",
"Improving coverage.",
"For prior benchmarks, annotators were asked to generate a list of substitutes from memory.",
"Psycholinguistic studies have shown that when humans are asked to predict the next word of a sentence, they deviate systematically from the true corpus probabilities (Smith and Levy, 2011; Eisape et al., 2020).",
"Thus, we may reasonably expect that asking humans to generate substitutes would similarly lead to systematic omissions of some appropriate substitutes.",
"We observe that prior benchmarks exclude many appropriate substitutes that are difficult for humans to think of (Section 4.2).",
"To address this limitation, we first obtain a set of candidate substitutes and then ask annotators to judge whether they would consider using a given candidate to replace the target word in the context.",
"That is, given a context c , target word w , and candidate substitute w (cid:48) , we ask humans to judge whether w (cid:48) is a good replacement for the target word: ( context c, target w, substitute w (cid:48) ) { 0 , 1 } , where a positive label 1 corresponds to I would actually consider using this substitute as the author of the original sentence, and a negative label 0 as the opposite.",
"As described in Section 3.2, we annotate a large pool of candidate substitutes to ensure high coverage of all possible substitutes.",
"We confirm that this increases coverage compared to COINCO in Section 4.2.",
"score defined as the number of annotators who produced w (cid:48) given the associated context c and target word w .",
"Instead, we define the score as the fraction of annotators who judged the w (cid:48) to be an appropriate replacement of w .",
"We argue that the previous definition of score reflects ease-of-recollection, but not necessarily appropriateness.",
"In Section 4.3, we show that our definition of score better represents the appropriateness of each substitute.",
"We collect substitutes and scores for a context and target word pair ( c, w ) via the following three steps.",
"Step 1: Select contexts, targets, and substitutes.",
"We use the subset of contexts and target words from COINCO .",
"Concretely, we start with the ( c, w ) pairs in COINCO and randomly select one w per c to annotate.",
"Here, the context c consists of three sentences, where the middle sentence has the target word w .",
"Next, we choose a set of candidate substitutes w (cid:48) to annotate for each ( c, w ) pair, as framing annotation as binary classification requires determining the set of candidate substitutes a priori.",
"We use human-generated substitutes from COINCO , then add substitutes suggested by a thesaurus (see Appendix A.2 for details).",
"In principle, candidate substitutes can be retrieved from any lexical resource or even sampled from a generative model, which we leave as future work.",
"By combining candidates from COINCO and the thesaurus, we increase the coverage of acceptable substitutes.",
"a list of candidate substitutes from the previous step, we collect three binary labels on each ( c, w, w (cid:48) )",
"triple (see Section 3.3 for details).",
"Then, we pass any substitute with at least one positive label to Step 3 and further collect fine-grained scores.",
"We show that the probability that an acceptable substitute gets incorrectly filtered out as an inconceivable substitute (three negative labels) is very low ( 0 . 8 %) in Section 4.4.",
"Step 3: Collect fine-grained scores.",
"In the final step, we collect seven more binary labels on the substitutes which received at least one positive label from Step",
"2. This yields a total of 10 binary labels for the substitutes.",
"We used Amazon Mechanical Turk (AMT) to crowdsource labels on substitutes.",
"Each Human Intelligence Task (HIT) contained a target word highlighted in the context and at most 10 candidate substitutes for the target word.",
"Each candidate substitute had three radio buttons for positive, negative, and abstain.",
"Annotators were asked to choose positive if they would actually consider using the substitute to replace the target word as the author of the context, negative if they would not consider using the substitute, and abstain if they do not know the meaning of the substitute.",
"We treated all abstain labels ( 1 . 24 % of total labels) as negative labels, thereby making it binary.",
"The benchmark includes abstain labels to maintain the option for them to be handled separately (e.g. excluded) in the future.",
"The interface, instructions, qualification conditions, and filtering criteria used for crowdsourcing can be found in Appendix B. 4 Data analysis Table 2 shows overall statistics of our benchmark.",
"SWORDS comprises a total of 1250 context and target word pairs ( 494 nouns, 448 verbs, 189 adjectives, 119 adverbs) and 71 , 813 total substitutes that have been labeled (including both acceptable and unacceptable substitutes).",
"For brevity, we defer an analysis of annotator agreement to Appendix C.1.",
"With our notion of acceptability, we first observe that 75 .",
"4 % of the substitutes from COINCO 4 are considered unacceptable, and 28 .",
"6 % of the substitutes are even inconceivable (receiving scores less than 50 % and 0 % from our human annotators).",
"Table 3 shows examples of substitutes that received relatively high scores under COINCO , yet were considered unacceptable under SWORDS .",
"With the same size as COINCO (by taking the subset of our benchmark with the highest scoring substitutes per target), the average score of the substitutes is 4 .",
"9 for SWORDS and 3 .",
"4 for COINCO , resulting in 1 .",
"4 x higher quality.",
"Furthermore, SWORDS minimizes the potential noise by having fine-grained scores to account for appropriateness (Section 4.3) as well as explicit inconceivable substitutes, which is useful for evaluation (Section 5.2).",
"We show that SWORDS achieves high coverage.",
"Among the conceivable substitutes in SWORDS , 14 .",
"4 % are only in COINCO (COINCO -only), 14 .",
"6 % are common to both COINCO and the thesaurus (COINCO Thesaurus), and 71 .",
"1 % are only from thesaurus (Thesaurus-only).",
"Among the acceptable substitutes, 24 % are from COINCO -only, 37 .",
"1 % are from COINCO Thesaurus, and 38 .",
"9 % are from Thesaurus-only.",
"This suggests that a substantial number of substitutes are not present in COINCO .",
"Overall, SWORDS contains 3 .",
"9 acceptable and 20 .",
"1 conceivable substitutes per target word on average, increasing those numbers by nearly 2x and 3x over COINCO , respectively.",
"In addition, we find that substitutes from COINCO -only are more likely to be common words whereas substitutes from Thesaurus-only are more likely to be rare words.",
"We compute the Zipf frequency (Speer et al., 2018) of each substitute based on the Google n -gram corpus (Brants and Franz, 4 For this analysis, we consider COINCO 's substitutes that are used and labeled under SWORDS . 2006) and threshold conceivable substitutes into three groups: uncommon ( 3 . 5 ), neutral, common ( > 4 . 5 ).",
"We observe that substitutes from COINCO -only are more likely to be common words ( 52 . 7 %) than those from Thesaurus-only ( 38 %).",
"On the other hand, the substitutes from Thesaurus-only tend to be more uncommon words ( 29 %) than those from COINCO -only ( 17 . 5 %).",
"We show that scores in SWORDS better reflect the appropriateness of each substitute compared to COINCO both quantitatively and qualitatively.",
"We find that if a substitute has a high score under COINCO (score > 1 ), it is likely to be acceptable under SWORDS (score > 50 %) almost all the time ( 99 . 6 %).",
"However, the converse does not hold: the acceptable substitutes under SWORDS have low scores (score 1 ) under COINCO half of the time ( 49 . 4 %).",
"Intuitively, this is because COINCO 's scores reflect the ease of producing the substitute from memory, whereas SWORDS 's scores reflect the appropriateness of the substitute.",
"Table 3 shows examples of context, target word, and substitute triples which received a low score from COINCO but a high score from SWORDS .",
"We show that the probability of an acceptable substitute falsely filtered out in Step 2 is very low.",
"To this end, we collected 10 additional labels on 100 context-target word pairs randomly selected from the test set, without reducing the pool of substitutes as in Step",
"2. By comparing the first three labels to the entire 10 labels, we find that 35.5% of substitutes without any positive labels in Step 2 could have received one or more positive labels if they were kept in Step",
"3. However, we find that 99 .",
"2 % of these substitutes were eventually considered unacceptable (judged by 10 labels), indicating that the probability of an acceptable substitute incorrectly filtered out in Step 2 is very low ( 0 . 8 %).",
"Figure 1 shows the score distribution of substitutes in SWORDS along with the source of substitutes: COINCO -only, COINCO Thesaurus, or Thesaurus-only.",
"Across scores, neither COINCO nor thesaurus completely dominates substitutes, and the overlap between COINCO and thesaurus is 5 1935 substitutes from COINCO -only, 972 from both, and 43 , 806 from thesaurus-only received a score of 0%.",
"We also find that SWORDS adds more substitutes for all the scores, although substitutes from the thesaurus tend to have a lower range of scores compared to those from COINCO .",
"Lastly, we observe that substitutes from COINCO roughly form a normal distribution, which suggests that even the substitutes provided by human annotators are controvertible, and that it is important to account for the intrinsically gradable nature of appropriateness with fine-grained scores.",
"In this section, we evaluate several methods on SWORDS .",
"The goals of this evaluation are threefold: (1) to prescribe our recommended evaluation practice for SWORDS , (2) to measure performance of existing large-scale pre-trained models and state-of-the-art lexical substitution systems, and (3) to measure human performance for the purpose of comparing current and future systems.",
"There are two primary evaluation settings in lexical substitution research: the generative setting (Mc-Carthy and Navigli, 2007) and the ranking setting (Thater et al., 2010).",
"In the generative setting, systems output a ranked list of candidate substitutes.",
"There are no restrictions on the number of candidates that a system may output.",
"In the ranking setting, systems are given all candidate substitutes from the benchmark (including those marked as unacceptable) and tasked with ranking them by appropriateness.",
"Here we primarily focus on the generative setting, as it is more relevant to writing assistance.",
"We defer our experiments on the ranking setting to Appendix D. 5.2 Evaluation metrics In a writing assistance context, we envision that lexical substitution systems would be used to suggest a limited number of substitutes to users (e.g. 10 substitutes as opposed to 100 ).",
"Hence, we consider evaluation metrics that examine the quality and coverage of the top-ranked substitutes from a system with respect to the substitutes that humans judged as acceptable (score > 50 %).",
"Specifically, we compute precision ( P k ) and recall ( R k ) at k 6 : P k = # acceptable substitutes in system topk # substitutes in system topk R k = # acceptable substitutes in system topk min( k, # acceptable substitutes ) Because we care about both quality (precision) and coverage (recall) when comparing systems, we report F k , the harmonic mean of P k and R k .",
"Likewise, we evaluate against the list of substitutes 6 Note that our definition of recall at k is non-standard; the min compensates for the fact that there are often fewer than k acceptable substitutes.",
"which humans judged as conceivable (score > 0 %).",
"P kc and R kc constitute precision and recall of systems against this larger candidate list, and F kc their harmonic mean.",
"Motivated by past work (Mc-Carthy and Navigli, 2007), we primarily examine performance for k = 10 and lemmatize system and reference substitutes during comparison.",
"We note that these metrics represent a departure from standard lexical substitution methodology, established by McCarthy and Navigli (2007).",
"Like P k and R k , the previously-used BEST and OOT metrics are also measures of precision and recall, but do not take advantage of the negative labels from our binary data collection protocol as no such labels existed in the earlier benchmarks.",
"Nevertheless, we report performance of all systems on these metrics in Appendix E as reference.",
"We evaluate both state-of-the-art lexical substitution systems and large-scale pre-trained models as baselines on SWORDS .",
"We reimplement the BERT-based lexical substitution system (BERT-LS) from Zhou et al. (2019), which achieves state-of-the-art results on past benchmarks.",
"As another lexical substitution system, we examine WORDTUNE (AI21, 2020), a commercial system which offers lexical substitution capabilities.",
"7 We also examine two large-scale pre-trained models adapted to the task of lexical substitution: BERT (Devlin et al., 2019) and GPT -3 (Brown et al., 2020).",
"To generate and rank candidates with BERT , we feed in the context with target word either masked (BERT-M) or kept intact (BERT-K), and output the top 50 most likely words according to the masked language modeling head.",
"Because the target word is removed, BERT-M is expected to perform poorlyits main purpose is to assess the relative importance of the presence of the target word compared to the context.",
"Note that both of these strategies for using BERT to generate candidates differ from that of BERT-LS, which applies dropout to the target word embedding to partially obscure it.",
"To generate candidates with GPT-3, we formulate lexical substitution as natural language generation (see Appendix D.5 for details).",
"lexical substitution systems evaluated on SWORDS .",
"We evaluate the performance of HUMANS using labels from a separate pool of annotators as described in Section 4.4.",
"Because this task is inherently subjective, this system represents the agreement of two independent sets of humans on this task, which should be thought of as the realistic upper bound for all metrics.",
"We consider the substitutes that have score > 0% from the separate pool of annotators as HUMANS 's substitutes in the generative setting.",
"We also consider both of the candidate sources, COINCO and THESAURUS , as oracle systems.",
"Each source contains a list of substitutes for every target word, and therefore can be viewed as a lexical substitution system and evaluated on SWORDS .",
"COINCO provides substitutes for a target word that were provided by (six) human annotators.",
"This can be thought of as a proxy for how humans perform on lexical substitution when recalling words off the top of their heads (as opposed to making binary judgements as in HUMANS ).",
"THESAURUS provides context-free substitutes for a target word (regard-less of their word senses) with the default ranking retrieved from the thesaurus.",
"This represents the context-insensitive ordering that a user of the same thesaurus would encounter.",
"Because these oracle systems only produce candidates which are guaranteed to be in SWORDS , they have an inherent advantage on the evaluation metrics over other systems.",
"Hence, to be more equitable to other systems, we additionally compute F 10 and F 10 c in a lenient fashionfiltering out model generated substitutes which are not in SWORDS (we refer to the setup without filtering as strict).",
"It is our intention that future systems should not use COINCO or THESAURUS in any way, as they leak information about the SWORDS benchmark.",
"Table 4 shows that the performance of all methods falls short of that of humans on all metrics.",
"We interpret this as evidence that SWORDS is a challenging benchmark, since strong (albeit unsupervised) baselines like BERT and GPT -3 do not reach parity with humans.",
"We also observe that two models (WORDTUNE and GPT -3) achieve higher F 10 than COINCO .",
"In other words, while all models perform worse than humans who are judging the appropriateness of substitutes (HUMANS ), some models appear to slightly outperform humans who Lenient Strict Model F 10 F 10 c F 10 F 10 c HUMANS * 51 .",
"are thinking of substitutes off the top of their head (COINCO ).",
"This implies that some lexical substitution models may already be helpful to humans for writing assistance, with room for improvement.",
"Overall, we find that there is no single system which emerges as the best on all metrics.",
"We note that, despite BERT-LS representing the state-of-the-art for past lexical substitution benchmarks, its performance falls short of that of commercial systems like GPT -3 and WORDTUNE on most criteria.",
"Also, the BERT -based methods output around 5 x as many candidates as the other models on average, thus having an inherent advantage in recall with the lenient criteria (see Table 7 in Appendix E).",
"In Table 4, we additionally report the performance of generative models by re-ranking their lists of substitutes using the best ranker from our candidate ranking evaluation, BERT (see Appendix D for details).",
"This procedure unilaterally improves performance for all systems on all metrics except for GPT",
"-3. 8 Hence, we speculate that improved performance on the ranking setting will be complementary to improved performance on the generative setting.",
"From a qualitative perspective, many of the systems we evaluate already produce helpful substi-8 We speculate that this is because GPT-3 produces many substitutes containing multiple word pieces, and mean-pooling several word pieces may result in lower-quality scores.",
"tutes (Table 5).",
"In examining errors, we find that BERT -based models and WORDTUNE tend to produce words that differ semantically from the target (e.g. league for zone).",
"Substitutes generated by GPT -3 are often repetitive (e.g. for zone, GPT -3 produced 64 substitutes, out of which only 13 were unique)we filter out duplicates before evaluating.",
"Finally, we observe that some systems produce appropriate substitutes which are not present in SWORDS (e.g. GPT -3 produces precinct for zone), indicating that SWORDS still has gaps in coverage.",
"However, the higher coverage and quality in SWORDS compared to past benchmarks still improves the reliability of our proposed evaluation.",
"As we already discussed previous lexical substitution benchmarks in Section 2 and models in Section 5, we use this section to draw connections to other related literature.",
"Word sense disambiguation.",
"The task of word sense disambiguation consists of selecting the intended meaning (i.e. sense) from the pre-defined set of senses for that word in a sense inventory.",
"The task of lexical substitution is closely related to word sense disambiguation, as many words are sense synonyms some of their senses are synonymous, but others are not (Murphy, 2010).",
"In fact, McCarthy (2002) proposed lexical substitution as an application-oriented word sense disambiguation task that avoids some of the drawbacks of standard word sense disambiguation, such as biases created by the choice of sense inventory (Kilgarriff, 1997).",
"Near-synonym lexical choice.",
"Words are often near-synonyms they can substitute for each other in some contexts, but not every context (DiMarco et al., 1993; Murphy, 2010).",
"SWORDS can be viewed as a collection of human judgments on when certain near-synonyms are substitutable in a given context.",
"The task of near-synonym lexical choice consists of selecting the original target word from a set of candidate words given a context where the target word is masked out (Edmonds and Hirst, 2002).",
"The candidate words are composed of the target word and its near-synonyms which are often retrieved from a lexical resource such as Hayakawa (1994).",
"In this task, systems are tested whether they can reason about near-synonyms and choose the best substitute that fits in the context, without knowing any direct semantic information about the Context The e-commerce free zone is situated in north Dubai, near the industrial free zone in Hebel Ali Substitutes in SWORDS sector (90%), district (90%), area (90%), region (70%), section (70%), range (60%), strip (60%), ground (50%), segment (50%), territory (50%), sphere (40%), realm (40%), place (30%), tract (30%), city (30%), belt (20%), circuit (20%), band (0%) Reference for F k ( 7 ) sector , district , area , region , section , range , strip Reference for F kc ( 17 ) sector , district , area , region , section , range , strip , ground, segment, territory COINCO ( 9 ) area , region , district , section , city, place, range , strip , territory THESAURUS ( 14 ) district , area , belt, territory, region , realm, sector , section , circuit, segment WORDTUNE ( 11 ) district , area , city, region , site, league, center, system, place, zona GPT -3 ( 13 ) district , area , territory, region , realm, sector , locality, section , quarter, precinct BERT-LS ( 50 ) belt, district , port, area , zones, city, park, center, strip , sector BERT-K ( 50 ) zones, district , area , city, belt, region , park, ville, site, sector BERT-M ( 50 ) zones, district , area , city, belt, territory, region , haven, park, site Table 5: Qualitative comparison of top 10 candidates generated by best systems.",
"Lexical and phrasal resources.",
"Lexical resources such as thesauri are often used to identify possible word substitutes.",
"WordNet (Fellbaum, 1998) is a widely used lexical resource for English that includes synonymy, antonymy, hypernymy, and other relations between words.",
"PPDB (Pavlick et al., 2015) includes both word-level and phrase-level paraphrase rules ranked by paraphrase quality.",
"These resources relate words and phrases in the absence of context, whereas lexical substitution requires suggesting appropriate words in context.",
"Paraphrase generation.",
"Work on sentence-level paraphrase generation considers a wide range of meaning-preserving sentence transformations, including phrase-level substitutions and large syntactic changes (Madnani and Dorr, 2010; Wieting and Gimpel, 2018; Iyyer et al., 2018; Hu et al., 2019).",
"Our work could be extended to phrases given appropriate methods for identifying target phrases and proposing candidate substitute phrases.",
"One benefit of focusing on word substitutions is that we can cover a large fraction of all appropriate substitutes, and thus estimate recall of generative systems.",
"Some word-level substitutions, such as function word variation and substitutions that rely on external knowledge, are also outside the scope of our work but occur in standard paraphrase datasets (Bhagat and Hovy, 2013).",
"Self-supervised pre-trained models.",
"The task of suggesting words given surrounding context bears strong resemblance to masked language modeling, which is commonly used for pretraining (De-vlin et al., 2019).",
"However, for lexical substitution, appropriate substitutes must not only fit in context but also preserve the meaning of the target word; thus, additional work is required to make BERT perform lexical substitution (Zhou et al., 2019; Arefyev et al., 2020).",
"Modeling human disagreement.",
"In SWORDS , we find considerable subjectivity between annotators on the appropriateness of substitutes.",
"For the task of natural language inference, recent work argues that inherent disagreement between human annotators captures important uncertainty in human language processing that current NLP systems model poorly (Pavlick and Kwiatkowski, 2019; Nie et al., 2020).",
"We hope that the fine-grained scores in SWORDS encourage the development of systems that more accurately capture the graded nature of lexical substitution.",
"We sincerely thank Frieda Rong, Nelson Liu, Stephen Mussmann, Kyle Mahowald, Daniel Jiang, and all reviewers for their help and feedback throughout this project.",
"We also thank OpenAI and Wordtune for allowing us to evaluate their systems.",
"This work was funded by DARPA CwC under ARO prime contract no.",
"W911NF-15-1-0462."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other"
] |
[
"Sequence-to-sequence (seq2seq) network is a well-established model for text summarization task.",
"It can learn to produce readable content; however, it falls short in effectively identifying key regions of the source.",
"In this paper, we approach the content selection problem for clinical abstractive summarization by augmenting salient ontological terms into the summarizer.",
"Our experiments on two publicly available clinical data sets (107,372 reports of MIMIC-CXR, and 3,366 reports of OpenI) show that our model statistically significantly boosts state-of-the-art results in terms of ROUGE metrics (with improvements: 2.9% RG-1, 2.5% RG-2, 1.9% RG-L), in the healthcare domain where any range of improvement impacts patients' welfare.",
"Radiology reports convey the detailed observations along with the significant findings about a medical encounter.",
"Each radiology report contains two important sections: 1 FINDINGS that encompasses ra-diologist's detailed observations and interpretation of imaging study, and IMPRESSION summarizing the most critical findings.",
"IMPRESSION (usually couple of lines and thrice smaller than finding) is considered as the most integral part of report (Ware et al., 2017) as it plays a key role in communicating critical findings to referring clinicians.",
"Previous studies have reported that clinicians mostly read the IMPRESSION as they have less time to review findings, particularly those that are lengthy or intricate (Flanders and Lakhani, 2012; Xie et al., 2019).",
"In clinical setting, generating IMPRESSION from FINDINGS can be subject to errors (Gershanik et al., 2011; Brady, 2016).",
"This fact is especially crucial when it comes to healthcare domain where even 1 Depending on institution, radiology reports may or may not include other fields such as BACKGROUND .",
"the smallest improvement in generating IMPRESSION can improve patients' well-being.",
"Automating the process of impression generation in radiology reporting would save clinicians' read time and decrease fatigue (Flanders and Lakhani, 2012; Kovacs et al., 2018) as clinicians would only need to proofread summaries or make minor edits.",
"Previously, MacAvaney et al. (2019) showed that augmenting the summarizer with entire ontology (i.e., clinical) terms within the FINDINGS can improve the content selection and summary generation to some noticeable extent.",
"Our findings, further, suggest that radiologists select significant ontology terms, but not all such terms, to write the IMPRESSION .",
"Following this paradigm, we hypothesize that selecting the most significant clinical terms occurring in the FINDINGS and then incorporating them into the summarization would improve the final IMPRESSION generation.",
"We further examine if refining FINDINGS word representations according to the identified clinical terms would result in improved IMPRESSION generation.",
"Overall, the contributions of this work are twofold:",
"(i) We propose a novel seq2seq-based model to incorporate the salient clinical terms into the summarizer (3.2).",
"We pose copying likelihood of a word as an indicator of its saliency in terms of forming IMPRESSION , which can be learned via a sequence-tagger (3.1);",
"(ii) Our model statistically significantly improves over the competitive baselines on MIMIC-CXR publicly available clinical dataset.",
"To evaluate the cross-organizational transferability, we further evaluate our model on another publicly available clinical dataset (OpenI) (5).",
"Few prior studies have pointed out that although seq2seq models can effectively produce readable content, they perform poorly at selecting salient",
"content to include in the summary (Gehrmann et al., 2018; Lebanoff et al., 2019).",
"Many attempts have been made to tackle this problem (Zhou et al., 2017; Lin et al., 2018; Hsu et al., 2018; Lebanoff et al., 2018; You et al., 2019).",
"For example, Zhou et al. (2017) used sentence representations to filter secondary information of word representation.",
"Our work is different in that we utilize ontology representations produced by an additional encoder to filter word representations.",
"Gehrmann et al. (2018) utilized a data-efficient content selector, by aligning source and target, to restrict the model's attention to likely-to-copy phrases.",
"In contrast, we use the content selector to find domain knowledge alignment between source and target.",
"Moreover, we do not focus on model attention here, but on rectifying word representations.",
"Extracting clinical findings from clinical reports has been explored previously (Hassanpour and Lan-glotz, 2016; Nandhakumar et al., 2017).",
"For summarizing radiology reports, Zhang et al. (2018) recently used a separate RNN to encode a section of radiology report.",
"2 Subsequently, MacAvaney et al. (2019) extracted clinical ontologies within the FINDINGS to help the model learn these useful signals by guiding decoder in generation process.",
"Our work differs in that we hypothesize that all of the ontological terms in the FINDINGS are not equally important, but there is a notion of odds of saliency for each of these terms; thus, we focus on refining the FINDINGS representations.",
"Our model consists of two main components: (1) a content selector to identify the most salient ontological concepts specific to a given report, and (2) a summarization model that incorporates the identified ontology terms within the FINDINGS into the summarizer.",
"The summarizer refines the FINDINGS word representation based on salient ontology word representation encoded by a separate encoder.",
"The content selection problem can be framed as a word-level extraction task in which the aim is to identify the words within the FINDINGS that are likely to be copied into the IMPRESSION .",
"We tackle this problem through a sequence-labeling approach.",
"We align FINDINGS and IMPRESSION to obtain required data for sequence-labeling task.",
"To this end, let b 1 , b 2 , ..., b n be the binary tags over the FINDINGS terms x = { x 1 , x 2 , ..., x n } , with n being the length of the FINDINGS .",
"We tag word x i with 1 if it meets two criteria simultaneously: (1) it is an ontology term, (2) it is directly copied into IMPRESSION , and 0 otherwise.",
"At inference, we characterize the copying likelihood of each FINDINGS term as a measure of its saliency.",
"Recent studies have shown that contextualized word embeddings can improve the sequence-labeling performance (Devlin et al., 2019; Peters et al., 2018).",
"To utilize this improvement for the content selection, we train a bi-LSTM network on top of the BERT embeddings with a softmax activation function.",
"The content selector is trained to maximize log-likelihood loss with the maximum likelihood estimation.",
"At inference, the content selector calculates the selection probability of each token in the input sequence.",
"Formally, let O be the set of ontological words which the content selector predicts to be copied into the IMPRESSION : O = { o i | o i FU ( x ) p o i (cid:15) } (1) where FU ( x ) is a mapping function that takes in FINDINGS tokens and outputs word sequences from input tokens if they appear in the ontology (i.e., RadLex) 3 , and otherwise skips them.",
"p o i denotes the selection probability of ontology word o i , and (cid:15) [0 , 1] is the copying threshold.",
"We exploit two separate encoders: (1) findings encoder that takes in the FINDINGS , and (2) ontology encoder that maps significant ontological terms identified by the content selector to a fix vector known as ontology vector.",
"The findings encoder is fed with the embeddings of FINDINGS words, and generates word representations h .",
"Then, a separate encoder, called ontology encoder, is used to process the ontology terms identified by the content selector and produce associated representations h o .",
"where x is the FINDINGS text, O is the set of ontology terms occurring in the FINDINGS and identified by the content selector, h o = { h o 1 , h o 2 , ..., h ol } is the",
"3 RadLex version 3.10, http://www.radlex.org/ Files/radlex3.10.xlsx",
"word representations yielded from the ontology encoder.",
"Note that h ol called ontology vector is the last hidden state containing summarized information of significant ontologies in the FINDINGS .",
"Although de facto seq2seq frameworks implicitly model the information flow from encoder to decoder, the model should benefit from explicitly modeling the selection process.",
"To this end, we implement a filtering gate on top of the findings encoder to refine the FINDINGS word representations according to the significant ontology terms within the FINDINGS and produce ontology-aware word representations.",
"Specifically, the filtering gate receives two vectors: the word hidden representation h i that has the contextual information of word x i , and the ontology vector h ol including the overal information of significant ontology words within the FINDINGS .",
"The filtering gate processes these two vectors through a liner layer with Sigmoid activation function.",
"We then compute the ontology-aware word hidden representation h (cid:48) i , given the source word hidden representation h i and the associated filtering gate F i .",
"where W h is the weight matrix, b denotes the bias term, and (cid:12) denotes element-wise multiplication.",
"We use an LSTM network as our decoder to generate the IMPRESSION iteratively.",
"In this sense, the decoder computes the current decoding state s t = LSTM ( s t 1 , y t 1 ) , where y t 1 is the input to the decoder (human-written summary tokens at training, or previously generated tokens at inference) and s t 1 is the previous decoder state.",
"The decoder also computes an attention distribution a = Softmax( h (cid:48)(cid:62) Vs (cid:62) ) with h (cid:48) being the ontology-aware word representations.",
"The attention weights are then used to compute the context vector c t = (cid:80) ni a i h (cid:48) i where n is the length of the FINDINGS .",
"Finally, the context vector and decoder output are used to either generate the next token from the vocabulary or copy it from the FINDINGS .",
"MIMIC-CXR.",
"This collection (Johnson et al., 2019) is a large publicly available dataset of radiology reports.",
"Following similar report preprocessing as done in (Zhang et al., 2018), we obtained 107,372 radiology reports.",
"For tokeniza-tion, we used ScispaCy (Neumann et al., 2019).",
"We randomly split the dataset into 80%(85,898)-10%(10,737)-10%(10,737) train-dev-test splits.",
"OpenI.",
"A public dataset from the Indiana Network for Patient Care (Demner-Fushman et al., 2016) with 3,366 reports.",
"Due to small size, it is not suitable for training; we use it to evaluate the cross-organizational transferability of our model and baselines.",
"Ontologies.",
"We use RadLex, a comprehensive radiology lexicon, developed by Radiological Society of North America (RSNA), including 68,534 radiological terms organized in hierarchical structure.",
"We compare our model against both known and",
"state-of-the-art extractive and abstractive models.",
"LSA (Steinberger and Jezek, 2004): An extractive vector-based model that employs Sigular Value Decomposition (SVD) concept.",
"NeuSum (Zhou et al., 2018): A state-of-the-art extractive model that integrates the process of source sentence scoring and selection.",
"4 Pointer-Generator (PG) (See et al., 2017): An abstractive summarizer that extends ses2seq networks by adding a copy mechanism that allows for directly copying tokens from the source.",
"4 We use open code at https://github.com/ magic282/NeuSum with default hyper-parameters.",
"PG model that first encodes entire ontological concepts within FINDINGS , then uses the encoded vector to guide decoder in summary decoding process.",
"Bottom-Up Summarization (BUS) (Gehrmann et al., 2018): An abstractive model which makes use of a content selector to constrain the model's attention over source terms that have a good chance of being copied into the target.",
"5 4.3 Parameters and Training We use SCIBERT model (Beltagy et al., 2019) which is pre-trained over biomedical text.",
"We employ 2-layer bi-LSTM encoder with hidden size of 256 upon BERT model.",
"The dropout is set to 0.2.",
"We train the network to minimize cross entropy loss function, and optimize using Adam optimizer (Kingma and Ba, 2015) with learning rate of 2 e 5 .",
"For the summarization model, we extended on the open base code by Zhang et al. (2018) for implementation.",
"6 We use 2-layer bi-LSTM, 1-layer LSTM as findings encoder, ontology encoder, and decoder with hidden sizes of 200 and 100, respectively.",
"We also exploit 100d GloVe embeddings pretrained on a large collection of 4.5 million radiology reports (Zhang et al., 2018).",
"We train the network to optimize negative log likelihood with Adam optimizer and a learning rate of 0.001.",
"Table.",
"1 shows the ROUGE scores of our model and baseline models on MIMIC-CXR, with human-written IMPRESSIONS as the ground truth.",
"Our model significantly outperforms all the baselines 5 We re-implemented the BUS model.",
"on all ROUGE metrics with 2.9%, 2.5%, and 1.9% improvements for RG-1, RG-2, and RG-L, respectively.",
"While NEUSUM outperforms the non-neural LSA in extractive setting, the extractive models lag behind the abstractive methods considerably, suggesting that human-written impressions are formed by abstractively selecting information from the findings, not merely extracting source sentences.",
"When comparing Ont.",
"PG with our model, it turns out that indeed our hypothesis is valid that a pre-step of identifying significant ontological terms can improve the summary generation substantially.",
"As pointed out earlier, we define the saliency of an ontological term by its copying probability.",
"As expected, BUS approach achieves the best results among the baseline models by constraining decoder's attention over odds-on-copied terms, but still underperforms our model.",
"This may suggest that the intermediate stage of refining word representations based on the ontological word would lead to a better performance than superficially restricting attention over the salient terms.",
"Table.",
"3 shows the effect of content selector on the summarization model.",
"For the setting without content selector, we encode all ontologies within the FINDINGS .",
"As shown, our model statistically significantly improves the results on RG-1 and RG-2.",
"To further evaluate the transferability of our model across organizations, we perform an evaluation on OpenI with our best trained model on MIMIC-CXR.",
"As shown in Table.",
"2, our model significantly outperforms the top-performing abstractive baseline model suggesting the promising cross-organizational transferability of our model.",
"Although challenges remain to reach human parity for all metrics, 81%",
"(a), 82%",
"(b), and 80%",
"(c) of our system-generated Impressions are as good as human-written Impressions across different metrics.",
"While our approach achieves the best ROUGE scores, we recognize the limitation of this metric for summarization task (Cohan and Goharian, 2016).",
"To gain a better understanding of qualities of our model, we conducted an expert human evaluation.",
"To this end, we randomly sampled 100 system-generated Impressions with their associated gold from 100 evenly-spaced bins (sorted by our system's RG-1) of MIMIC-CXR dataset.",
"The Impressions were shuffled to prevent potential bias.",
"We then asked three experts 7 to score the given Impressions independently on a scale of 1-3 (worst to best) for three metrics: Readability.",
"understandable or nonsense; Accuracy.",
"fully accurate, or containing critical errors; Completeness.",
"having all major information, or missing key points.",
"Figure.",
"2 presents the human evaluation results using histograms and arrow plots as done in (MacAvaney et al., 2019), comparing our system's Impressions versus human-written Impressions.",
"The histograms indicate the distribution of scores, and arrows show how the scores changed between ours and human-written.",
"The tail of each arrow shows the score of human-written IMPRESSION , and its head indicates the score of our system's IMPRESSION .",
"The numbers next to the tails express the count of Impressions that gained score of s (cid:48) by ours and s by gold.",
"8 We observed that while there is still a gap between the system-generated and human-written Impressions, over 80% of our system-generated Impressions are as good 9 as the associated human-written Impres-7 Two radiologists and one medical student.",
"sions.",
"Specifically, 73% (readability), and 71% (accuracy) of our system-generated Impressions ties with human-written Impressions, both achieving full-score of 3; nonetheless, this percentage is 62% for completeness metric.",
"The most likely explanation of this gap is that deciding which findings are more important (i.e., should be written into Impression) is either subjective, or highly correlates with the institutional training purposes.",
"Hence, we recognize cross-organizational evaluations in terms of Impression completeness as a challenging task.",
"We also evaluated the inter-rater agreement using Fleiss' Kappa (Fleiss, 1971) for our system's scores and obtained 52% for readability, 47% for accuracy, and 50% for completeness, all of which are characterized as moderate agreement rate.",
"We proposed an approach to content selection for abstractive text summarization in clinical notes.",
"We introduced our novel approach to augment standard summarization model with significant ontological terms within the source.",
"Content selection problem is framed as a word-level sequence-tagging task.",
"The intrinsic evaluations on two publicly available real-life clinical datasets show the efficacy of our model in terms of ROUGE metrics.",
"Furthermore, the extrinsic evaluation by domain experts further reveals the qualities of our system-generated summaries in comparison with gold summaries.",
"We thank Arman Cohan for his valuable comments on this work.",
"We also thank additional domain expert evaluators: Phillip Hyuntae Kim, and Ish Talati."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"objective",
"objective",
"method",
"result",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"method",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"result",
"result",
"other",
"other"
] |
[
"The majority of current systems for end-to-end dialog generation focus on response quality without an explicit control over the affective content of the responses.",
"In this paper, we present an affect-driven dialog system, which generates emotional responses in a controlled manner using a continuous representation of emotions.",
"The system achieves this by modeling emotions at a word and sequence level using: (1) a vector representation of the desired emotion, (2) an affect regularizer, which penalizes neutral words, and (3) an affect sampling method, which forces the neural network to generate diverse words that are emotionally relevant.",
"During inference, we use a re-ranking procedure that aims to extract the most emotionally relevant responses using a human-in-the-loop optimization process.",
"We study the performance of our system in terms of both quantitative (BLEU score and response diver-sity), and qualitative (emotional appropriateness) measures.",
"Recent breakthroughs in deep learning techniques have had an impact on end-to-end conversational systems (Chen et al., 2017).",
"Current research is mainly focused on functional aspects of conversational systems: keyword extraction, natural language understanding, and pertinence of generated responses (Ilievski et al., 2018).",
"Although these aspects are indeed key features for building a commercial system, most existing solutions lack social intelligence.",
"Conversational systems could bene-fit from incorporating social intelligence by: (1) avoiding interaction problems that may arise when the system does not understand the user's request (e.g., inappropriate responses that cause user anger) (Maslowski et al., 2017), and (2) building rapport Both authors contributed equally to this work.",
"with the user (Strohkorb et al., 2016).",
"Our method makes such conversational systems more social by outputting responses expressing emotion in a controlled manner, without sacrificing grammatical correctness, coherence, or relevance.",
"Existing sequence-to-sequence (seq2seq) architectures, either recurrent(Sordoni et al., 2015; Serban et al., 2015), attention(Vaswani et al., 2017) or convolutional neural network (CNN)-based (Fan et al., 2018), do not provide a straightforward way to generate emotionally relevant output in a controlled manner.",
"We introduce EMOTIonal CONversational System (EMOTICONS), which generates emotion-specific responses.",
"It is based on novel contributions presented in this paper which fall in two main categories: explicit models which allow a controlled emotion-based response generation (e.g., methods based on emotion embeddings, affective sampling, and affective re-ranking), and implicit models with no direct control over the desired emotion (i.e., affective regularizer).",
"We show that EMOTICONS outperforms both the system proposed by Zhou et al. (2018) (current state of the art for our task) and the vanilla seq2seq in terms of BLEU score (Papineni et al., 2002) (improve-ment up to 7 . 7% ) and response diversity (improve-ment up to 52% ).",
"Additionally, we qualitatively evaluate the emotional content of the generated text (see example responses in Table 1).",
"The user study (22 people) demonstrates that EMOTICONS is able to generate grammatically correct, coherent, emotionally-rich text in a controlled manner.",
"Sequence-to-sequence (seq2seq) models have attracted a lot of attention in the past few years, especially in the fields of Neural Machine Translation (Sutskever et al., 2014; Bahdanau et al., 2014) and Neural Dialogue Generation (Sordoni et al., 2015; Vinyals and Le, 2015; Serban et al.,",
"2015).",
"Prior work has focused on designing architectures that lead to the best performance in terms of BLEU (Papineni et al., 2002) and Perplexity scores.",
"Most seq2seq models are based on gated recurrent neural networks, either Long Short Term Memory (LSTM) (Hochreiter and Schmidhu-ber, 1997) or Gated Recurrent Unit (GRU) (Serban et al., 2015), but in general it is difficult to conclude which gating mechanism performs better (Chung et al., 2014).",
"In our model, we use GRU because it has fewer parameters to optimize, and it is faster to train.",
"In order to overcome the problem of generating trivial or mundane responses, there have been developments in inference techniques for encoder-decoder systems.",
"Use of beam search has been shown to improve the general quality of generated answers, while Maximum Mutual Information (MMI) (Li et al., 2016) has improved the diversity of generated answers, leading to more meaningful output.",
"We build on these techniques during affective inference.",
"Emotion-based (affective) dialog generation systems have received increasing attention in the past few years.",
"Huang et al. (2018) use emotion tokens (special words in a dictionary representing specific emotions) at either the encoder or decoder side, forcing the decoder to output a sentence with one specific emotion.",
"Zhou et al. (2018) build their system using external and internal memory, where the former forces the network to generate emotional words, and the latter measures how emotional a generated sequence is compared to a target sequence.",
"Lubis et al. (2018) modeled emotions in Valence-Arousal (VA) space for response generation.",
"We extend this idea by using a Valence-Arousal-Dominance (VAD) Lexicon (Mohammad, 2018), as it has been shown by Broekens (2012) that the third dimension (Dominance) is useful for modeling affect.",
"Asghar et al. (2017) used the VAD Lexicon, but they let the neural network choose the emotion to generate (by maximizing or minimizing the affective dissonance) and their system cannot generate different emotional outputs for the same input, nor generate a specified emotion.",
"Our system (see overview in Figure 1) is divided into three main components: (1) Emotion Labeling automatic labeling of sentences according to the emotional content they express, using an emotion classifier (3.2.1); labeling of words with VAD Lexicon values (4.2), (2) Affective Training training of two seq2seq networks, which use an encoder-decoder setting.",
"The first network is trained with prompt-response pairs (S-T), whereas the second (used during Affective Inference) is trained with reversed pairs (T-S), (3) Affective Inference generation of many plausible responses, which are re-ranked based on emotional content.",
"Let V = { w 1 , w 2 , . . . , w | V | } be a vocabulary, and X = ( x 1 , x 2 , . . . , x |X| ) a sequence of words (e.g. a sentence).",
"We denote EX R 6 as an emotion vector representing a probability distribution over six emotions associated with the sequence X : EX = p anger p surprise p joy p sadness p fear p disgust Note that in this work we focus on six basic emotions proposed by Paul Ekman (Ekman et al., 1983) but the techniques we develop are general and can be extended to a more fine grained list of emotions.",
"X can be an input sequence, candidate response, final response, or target response (denoted respectively as S , RC , R final , R 0 ).",
"We introduce E 0 , which during training, is the representation of the emotion of the target response ( R 0 ).",
"During testing, E 0 indicates a desired emotion for the final response ( R final ), and can be set manually.",
"For Reversedseq2seq Affective Re-ranking Vanilla seq2seq: Of course.",
"example, in the case of anger', E 0 would be a one-hot vector with 1 at the first position, and 0 elsewhere.",
"In our work, we extend the standard seq2seq model (Sutskever et al., 2014), that predicts the final response R final = argmax RC p ( RC | S ) .",
"The proposed affective system aims to extend the inference mechanism by incorporating emotions encoded in E 0 : R final = argmax RC p ( RC | S, E 0 ) (1) 3.2 Affect Modeling We extend the standard seq2seq architecture by including emotion-specific information during the training and the inference.",
"A critical challenge in both generating and evaluating responses is a reliable assessment of emotional state.",
"We use two representations of emotion: (1) a categorical representation with six emotions (anger, surprise, joy, sadness, fear, disgust), and (2) a continuous representation in a VAD space.",
"The latter uses a VAD Lexicon introduced by Mohammad (2018), where each of 20k words is mapped to a 3D vector of VAD values, ranging from 0 (lowest) to 1 (highest) ( v [0 , 1] 3 ).",
"Valence measures the posi-tivity/negativity, Arousal the excitement/calmness, and Dominance the powerfulness/weakness of the emotion expressed by a word.",
"This expands the work of Lubis et al. (2018), who modeled emotions only in VA space.",
"In the following sections we describe different versions of the proposed model.",
"Affective training requires E 0 , the emotion representation of the target sequence.",
"In order to label all sentences of the corpus with E 0 , we use an Emotion Classifier by Witon et al. (2018).",
"The classifier predicts a probability distribution over class of six emotions.",
"The classifier predictions for Cornell Movie-Dialogs Corpus (Cornell) have been shown to be highly correlated with human predictions (Witon et al., 2018).",
"To explicitly generate responses with emotion, this version of the model includes an emotion embedding at the encoder side.",
"We feed the encoder with S (cid:48) = ( e SEE , s 1 , s 2 , . . . , s | S | ) , where e SEE = ASEEE 0 is an Emotion Embedding ( e SEE R 3 ), and ASEE R 3 6 is a mapping (learned during training) from E 0 into an emotion embedding space.",
"Another way of forcing an emotional output is to explicitly indicate the target emotion at every step in decoding along with other inputs.",
"Formally, the GRU hidden state at time t is calculated as h t = f ( h t 1 , r (cid:48) t ) with r (cid:48) t = [ r t 1 ; e SED ] , where e SED is defined similarly as e SEE .",
"It is worth noting that ASEE and ASED are different, which implies that the emotion embedding spaces they map to are also different.",
"Compared to a similar approach introduced by Huang et al. (2018), our solution enables the desired emotional content, E 0 , to be provided in a continuous space.",
"To model the word-level emotion carried by each sequence, we introduce an Affective Regularizer (AR), which expresses the affective distance between R final and R 0 , in the VAD space.",
"It forces the neural network to prefer words in the vocabulary that carry emotions in terms of VAD.",
"Mathematically, we extend the regular Negative Log Likelihood (NLL) loss with an affective regularizer, LAR : L = LNLL + LAR = log p ( R final |S ) + LVAD ( R final , R 0 ) LVAD ( R final , R 0 ) = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) |R final | (cid:88) t =1 EVAD s t |R final | |R 0 | (cid:88) t =1 e VAD r 0 t |R 0 | (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) , where s t = softmax( h t ) ( s t R | V | ) is a confidence of the system of generating words w 1 , . . . , w | V | at time t and R .",
"e VAD x R 3 is a 3D vector representing emotion associated with a word x in VAD space (note that e VAD x is constant with respect to t ), and EVAD R 3 | V | is a matrix containing e VAD w v for all | V | words in the vocabulary: EVAD = (cid:104) e VAD w 1 ; . . . ; e VAD w | V | (cid:105) Intuitively, the regularizer penalizes the deviation of the emotional content of the generated response, R final , from the desired response, R 0 .",
"The emotional information carried by R final is the weighted sum of emotion representations e VAD w i for all words w i in the vocabulary, where the weights are determined by the confidence s t .",
"Sequential word generation allows sampling of the next word, based on the emotional content of the current incomplete sequence.",
"If some words in a sequence do not express the target emotion E 0 , other words can compensate for this by changing the final affective content, e.g., in a sentence I think that the cat really loves me!, the first 6 words are neutral, whereas the end of the sentence make it clearly express joy.",
"We incorporate this observation by explicitly generating the next word using an Adaptive Affective Sampling Method: log p ( RC |S , E 0 ) = |R C | (cid:88) t =1 log p ( r t | r <t , e r <t , h S , E 0 ) , p ( r t | r <t , e r <t , h S , E 0 ) = softmax g ( h t ) + (1 ) softmax v ( EVAD t ) , where g ( h t ) is a linear mapping from GRU hidden state h t to an output vector of size | V | , and 0 1 is learned during training.",
"The first term in Equation 3.2.5 is responsible for generating words according to a language model preserving grammatical correctness of the sequence, whereas the second term forces generation of words carrying emotionally relevant content.",
"EVAD t R 3 is a vector representing the remaining emotional content needed to match a goal ( EVAD 0 ) after generating all words up to time t .",
"It is updated every time a new word r t with an associated emotion vector e VAD r t is generated: EVAD t = EVAD t 1 e VAD r t 1 EVAD 0 = |R 0 | (cid:80) t =1 e VAD r 0 t , training MVADE 0 max length , inference where e VAD r 0 t is an emotion vector associated with words r 0 t in the target sequence, max length is a maximum length set for the seq2seq model, and MVAD R 3 6 is a mapping from six-dimensional emotion space into VAD space (every emotion has a VAD vector as introduced by Hoffmann et al. (2012), scaled to a range [0, 1]): a ng e r s u r p r i s e j oy s a dn e ss f ea r d i s gu s t (cid:34) 0 1 1 0 0 0 (cid:35) V MVAD = 1 1 1 0 1 0 .",
"v ( EVAD t ) is a vector, whose i-th component measures the potential remaining emotional content of the sequence in the case of choosing the i-th word w i :",
"In the following, we set a constant = 1 after generating the first max length / 2 words, as this setting ensures that the first generated words carry the right emotional content, while not sacrificing the grammatical correctness of the whole response.",
"This leads to an improvement in performance.",
"The methods described in the previous sections aim to improve the seq2seq training/sampling procedure.",
"We hypothesize that a good inference strategy is crucial for generating diverse and emotion-specific responses.",
"As Li et al. (2016) suggest, traditional objective functions, i.e., likelihood of a response given an input, can be improved by using an N -best list and MMI during inference.",
"We build upon this idea; our hypothesis is that by generating B diverse sequences and re-ranking the responses, we are more likely to infer one best emotion-specific response.",
"The B -best list is found using Beam Search of size B with length normalization.",
"In the MMI-bidi setting, Li et al. (2016) rank all responses found during beam search based on a score calculated as: R final = argmax RC p ( RC |S ) + p ( S|R C ) + |R C | , (2) where p ( S|R C ) is a model with the same architecture as p ( RC |S ) trained on reversed prompt-response pairs (T-S), and | RC | is the length of the candidate response, RC .",
"We modify this objective in the following form: R final = argmax RC p ( RC |S , E 0 ) + p ( S|R C ) + |R C | (cid:107) ERC E 0 (cid:107) , (3) where the last term penalizes the deviation of the emotional content, ERC , of the candidate response, RC , from the desired emotional content, E 0 .",
"The task is to find optimal values of parameters , and , which give the best responses in terms of grammatical correctness, diversity ( , ) and emotional content ( ) (see 5 and 6).",
"Cornell contains around 10K movie characters and around 220K dialogues (Danescu-Niculescu-Mizil and Lee, 2011).",
"OpenSubtitles2018 is a collection of translated movie subtitles with 3.35G sentence fragments (Tiedemann, 2009).",
"It has been filtered to get pairs of consecutive sequences (containing between 5 and 30 words), with respective timestamps within an interval of 5 seconds, that are part of a conversation of at least 4 turns.",
"The filtered dataset contains 2.5M utterances.",
"ASCII symbols are removed.",
"To restrain the vocabulary size and correct the typos, we use a default vocabulary of fixed size 42K words from spaCy.",
"Each word in the dataset is then compared with the vocabulary using the difflib library 2 in Python (algorithm based on the Levenshtein distance), and mapped to the most similar word in the vocabulary.",
"If no word with more than 90% of similarity is found, the word is considered a rare word or a typo, and is mapped to the out-of-vocabulary (OOV) word.",
"For Cornell, less than 1% of the unigrams are OOV.",
"The VAD lexicon may not have all the words in the vocabulary.",
"Based on the word similarity (us-ing difflib library), each word of the vocabulary is assigned a VAD value of the most similar word in the VAD lexicon.",
"If no word with more than 90% of similarity is found, a neutral VAD value ( v = [0 . 5 , 0 . 5 , 0 . 5] ) is assigned.",
"We compare our work to two different baselines: a vanilla seq2seq and the ECM introduced by Zhou et al. (2018).",
"For the external memory we use our affective dictionary and train the model using the default parameters provided by authors.",
"All the hyper-parameters have been optimized on the validation set using BLEU score (Papineni et al., 2002).",
"For the encoder, we use two-layer bidirectional GRUs (hidden size of 256 ).",
"The final hidden states from both directions are concatenated and fed as an input to the decoder of one-layer unidirectional GRUs (hidden size of 512 ).",
"The embedding layer is initialized with pre-trained word vectors of size 300 (Mikolov et al., 2018), trained with subword information (on Wikipedia 2017, UMBC web-base corpus and statmt.org news dataset), and updated during training.",
"We use ADAM optimizer (Kingma and Ba, 2014) with a learning rate of 0 .",
"001 for learning p ( RC |S , E 0 ) (resp.",
"0 .",
"01 for p ( S|R C ) ), which is updated by using a scheduler with a patience of 20 epochs and a decreasing rate of 0 .",
"5 .",
"The gradient norm is clipped to 5 .",
"0 , weight decay is set to 1 e 5 , and dropout (Srivastava et al., 2014) is set to 0 .",
"2 .",
"The maximum sequence length is set to 20 for Cornell and to 30 for OpenSubtitles.",
"The models have been trained on 94% , validated on 1% , and tested on 5% of the data.",
"2 https://docs.python.org/3/library/difflib.html Model C distinct-1 C distinct-2 OS distinct-1 OS distinct-2 C BLEU OS BLEUN o r e -r a nk Baseline 0.0305 0.1402 0.0175 0.1205 0.0096 0.094 ECM 0.0310 0.1412 0.0180 0.1263 0.0099 0.099 SEE 0.0272 0.1331 0.0170 0.1100 0.0110 0.093 SED 0.0303 0.1502 0.0189 0.1231 0.0128 0.103 WI 0.0316 0.1480 0.0175 0.1235 0.0129 0.100 WE 0.0310 0.1400 0.0195 0.1302 0.0098 0.095 WI + WE 0.0342 0.1530 0.0198 0.1300 0.0108 0.105 (+12.1%) (+9.1%) (+13.1%) (+7.9%) (+12.5%) (+11.7%) R e -r a nk MMI baseline 0.0379 0.1473 0.0200 0.1403 0.0130 0.105 EMOTICONS =0 0.0406 0.2030 0.0305 0.1431 0.0140 0.110 (+7.1%) (+37.8%) (+52.5%) (+2.0%) (+7.7%) (+4.8%) Table 2: Quantitative results: Results for all proposed models trained on Cornell (C) and OpenSubtitles (OS).",
"To evaluate language models, we use BLEU score (computed using 1 to 4 -grams), as it has been shown to correlate well with human judgment (Agarwal and Lavie, 2008).",
"Perplexity does not provide a fair comparison across the models: during the training of the baseline seq2seq model, we minimize the cross entropy loss (logarithm of per-plexity), whereas in other models (e.g., WI) we aim to minimize a different loss not directly related to perplexity (cross entropy extended with the affective regularizer).",
"Having more diverse responses makes the affective re-ranking more efficient, to evaluate diversity we count the number of distinct unigrams (distinct-1) and bigrams (distinct-2), normalized by the total number of generated tokens.",
"The performance of different models introduced in 3 are presented in Table 2.",
"MMI bas.",
"refers to a system that re-ranks responses based on Equation 2, where both p ( RC |S ) and p ( S|R C ) are baseline seq2seq models.",
"EMOTICONS is a system based on Equation 3, where p ( RC |S , E 0 ) is computed using a composition of Word-Level Implicit Model (WI) and Word-Level Explicit Model (WE), and p ( S|R C ) is computed using WI (as we are not interested in explicitly using the input emotion).",
"We optimize and on the validation set using BLEU score, since Li et al. (2016) have shown that adding MMI during inference improves the BLEU score.",
"We set = 0 and find optimal values opt = 50 .",
"0 and opt = 0 .",
"001 using grid search.",
"goal of our work, but the observed improvement (after adding emotions) shows that the different systems are able to extract and use emotional patterns to improve the general language model.",
"5.1 Response Diversity From Table 2, we observe that for both Cornell and OpenSubtitles datasets, SED, WI, and WE models outperform the vanilla seq2seq and the ECM for at least one of the two distinct measures.",
"SEE has the worst performance overall and does not compete with either the baseline, nor with SED.",
"This is expected according to the results reported by Huang et al. (2018).",
"It seems that the model is not able to capture the information carried by the additional emotion embedding token it is treated as just one additional word among 20 others.",
"SED makes better use of the emotion information, as it is used at each time step during decoding.",
"In addition, it is more natural to use these features during the decoding, since the emotion embedding represents the desired emotion of the response.",
"The combination of WI and WE performs best in terms of distinct-1 and distinct-2 measures among all models without re-ranking, yielding an improvement of up to 13 .",
"1% .",
"It suggests that the word level emotion models suit the seq2seq architecture better.",
"During training, both models are encouraged not only to match the target words, but also to promote less frequent words that are close to the target words in terms of VAD values (affective regularizer and affective sampling), fostering the model to generate more diverse responses.",
"improvement in diversity, but the relative improvement for OpenSubtitles (MMI bas. ) is smaller than the one reported by Li et al. (2016).",
"This could originate from the different data filtering and beam search strategy, and the fact the hyper-parameter optimization has been performed on Cornell.",
"EMOTICONS is a combination of WI + WE (best performing model) for p ( RC |S , E 0 ) and WI for p ( S|R C ) , it is better than MMI bas.",
"(up to 52 . 5% gain in distinct-1).",
"It is worth noting that we observe higher scores in terms of diversity for the reversed model p ( S|R C ) compared to the normal model p ( RC |S , E 0 ) , while training on Cornell.",
"We can explain this using the data distribution: distinct-2 is higher for the questions than for the answers ( 0 . 167 and 0 . 154 for Cornell, respectively).",
"Table 2 shows that, in general, introducing emotional features into the process of generating responses does not reduce the BLEU score.",
"To reduce the potential negative impact of choosing inappropriate first words in the sequence, we compute the BLEU score on the result of beam search of size 200.",
"For example, if the first word is I, the seq2seq models tend to generate a response I don't know with high probability, due to the high number of appearances of such terms in the training set.",
"In certain cases, like WI and SED, we observe an improvement.",
"Such an improvement is expected, since our model takes into account additional (affective) information from the target sequence during response generation.",
"The quantitative evaluation shows that EMOTICONS outperforms the baseline while adding the emotional features during response generation.",
"The re-ranking phase did not take into account the affective term ( = 0 in Equation 3).",
"Setting a different value would not necessarily improve any of the available metrics (e.g., BLEU score, diversity), as they do not explicitly take into account affective content in their definition.",
"In this section, we describe an optimization procedure, relying on human judgment, for finding the optimal value of .",
"Button (Broekens and Brinkman, 2013), a reliable affective tool for assigning emotions, which, to our knowledge, has never been used for estimating the emotional content of the generated responses.",
"In our experiment, the AffectButton lets users choose a facial expression from a continuous space (see Figure 3), that best matches the emotional state associated with the sequence, which is then mapped into the VAD space.",
"In order to conduct the experiment, we chose a pool of 12 annotators, who annotated a total of 400 sequences.",
"The prompts were randomly chosen from the test set of Cornell, among the 200 sequences that create the most diverse responses in terms of distinct-2.",
"The more diverse the responses are, the more likely we are to select a response carrying a desired emotion.",
"The responses for the prompts were generated using EMOTICONS where the target emotion was either fear, anger, joy, or surprise; the four corners of the AffectButton.",
"was randomly chosen among 20 uniformly sampled values in [0 , 10] .",
"In Figure 2, we present the difference between the VAD value according to the face assigned by the user, and the desired emotion for the response.",
"The average curve presents a global minimum at opt = 4 .",
"2 .",
"The system does not perform equally well at generating different emotions according to the human judgment.",
"On average, we observe lower values for joy compared to anger in Figure 2.",
"This phenomenon is expected, as in the re-ranking Model Grammatical User Preference Correctness Total Majority Vote MMI bas.",
"process ERC is estimated using the emotion classifier (Witon et al., 2018) which detects joy more accurately than anger ( 77% versus 57% ), surprise ( 62% ) and fear ( 69% ).",
"In this section, we qualitatively evaluate the emotional content and correctness of the responses generated by EMOTICONS = opt compared to the ones from MMI bas.",
"through a user study.",
"It consists of three different experiments which measure grammatical correctness, user preference, and emotional appropriateness.",
"For all experiments, we chose prompts from the test set of Cornell, for which the most diverse responses were created by MMI bas.",
"in terms of distinct-2.",
"We test EMOTICONS by generating responses according to four emotions: fear, anger, joy, and surprise (beam size of 200).",
"In this experiment, we used 40 prompts.",
"For each prompt, we generated 5 sentences (4 for EMOTICONS, and 1 for MMI bas. ) that were presented in a random order to 3 native English speakers.",
"They assigned either 0 (sentence grammatically incorrect), or 1 (sentence grammatically correct) for all sentences.",
"To measure the agreement across annotators, we calculate Fleiss' = 0 .",
"4128 , which corresponds to moderate agreement.",
"Our model does not substantially sacrifice the grammatical correctness of the responses (see Table 3).",
"In this setting, we quantify how likely the user is going to prefer the response generated by EMOTICONS compared to the one generated by MMI bas.",
".",
"We asked 18 annotators to choose their",
"fa-(a) Bas.",
"vorite response to the input query among eight proposed answers (top four responses coming from the MMI bas and 4 coming from EMOTICONS with the four different target emotions).",
"Each of 45 sentences were annotated by three different annotators.",
"Results of the experiment (Table 3) indicate that users strongly prefer EMOTICONS over MMI bas.",
".",
"In this experiment, we show that our model is able to generate emotions in a controlled manner.",
"For each of the 5 models, 22 users assign a face via the AffectButton.",
"We generate responses for 120 different prompts.",
"We keep the responses that were annotated with a VAD vector with the norm greater than 2 , corresponding to those expressing strong emotions.",
"We compute the average VAD vectors for the annotated sequences for each model, with corresponding AffectButton faces (Figure 3).",
"The majority of user-assigned faces have a high arousal value, which can be explained by the fact that users tend to click in one of the four corners of the AffectButton.",
"The majority of the faces represent an accurate portrayal of the desired emotion.",
"The poor performance of EMOTICONS at expressing surprise comes from the fact that (1) users often mismatch surprise with joy, leading to a neutral dominance value, and (2) surprise is one of the most difficult emotions to judge (see 6).",
"We have presented EMOTICONS, a system that can generate responses with controlled emotions.",
"The flexibility of the presented solution allows it to be used in any kind of neural architecture as long it fits the encoder-decoder framework.",
"Currently, EMOTICONS does not generate different emotions equally well.",
"Future work could include incorporating contextual information that would help EMOTICONS to better capture emotional content.",
"We would like to thank anonymous reviewers for their insightful comments.",
"Mubbasir Kapadia has been funded in part by NSF IIS-1703883, NSF S&AS-1723869, and DARPA SocialSim-W911NF-17-C-0098."
] | [
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Bigrams (two-word sequences) hold a special place in semantic composition research since they are the smallest unit formed by composing words.",
"A semantic relatedness dataset that includes bigrams will thus be useful in the development of automatic methods of semantic composition.",
"However, existing relatedness datasets only include pairs of unigrams (single words).",
"Further, existing datasets were created using rating scales and thus suffer from limitations such as inconsistent annotations and scale region bias.",
"In this paper, we describe how we created a large, fine-grained, bigram relatedness dataset ( BiRD ), using a comparative annotation technique called BestWorst Scaling .",
"Each of BiRD's 3,345 English term pairs involves at least one bigram.",
"We show that the relatedness scores obtained are highly reliable (split-half reliability r = 0 . 937 ).",
"We analyze the data to obtain insights into bigram semantic relatedness.",
"Finally, we present benchmark experiments on using the relatedness dataset as a testbed to evaluate simple unsupervised measures of semantic composition.",
"BiRD is made freely available to foster further research on how meaning can be represented and how meaning can be composed.",
"The term semantic relatedness refers to the extent to which two concepts are close in meaning.",
"The ability to assess semantic relatedness is central to the use and understanding of language (Hutchison, 2003; Mohammad and Hirst, 2005; Huth et al., 2016).",
"Manual ratings of semantic relatedness are useful for:",
"(a) obtaining insights into how humans perceive and use language; and",
"(b) developing and evaluating automatic natural language systems.",
"Existing datasets of semantic relatedness, such as the one by Finkelstein et al. (2002), only focus on pairs of unigrams (single words).",
"However, the concept of semantic relatedness applies more generally to any unit of text.",
"Work in semantic representation explores how best to represent the meanings of words, phrases, and sentences.",
"Bigrams (two-word sequences) are especially important there since they are the smallest unit formed by composing words.",
"Thus it would be useful to have large semantic relatedness datasets involving bigrams.",
"Existing datasets also suffer from shortcomings due to the annotation schemes employed.",
"Except in the case of a few small but influential datasets, such as those by Miller and Charles (1991) and Rubenstein and Goodenough (1965), annotations were obtained using rating scales.",
"(Annotators were asked to give scores for each pair; usually on a discrete 0 to 5 scale.)",
"Rating scales suffer from significant known limitations, including: inconsistencies in annotations by different annotators, inconsistencies in annotations by the same annotator, scale region bias (annotators often have a bias towards a portion of the scale), and problems associated with a fixed granularity (Presser and Schuman, 1996).",
"BestWorst Scaling (BWS) is an annotation scheme that addresses these limitations by employing comparative annotations (Louviere, 1991; Cohen, 2003; Louviere et al., 2015; Kiritchenko and Mohammad, 2017).",
"Annotators are given n items at a time (an n -tuple, where n > 1 and commonly n = 4 ).",
"They are asked which item is the best (highest in terms of the property of interest) and which is the worst (least in terms of the property of interest).",
"1 When 1 At its limit, when n = 2 , BWS becomes a paired comparison (Thurstone, 1927; David, 1963), but then a much larger set of tuples need to be annotated (closer to N 2 ).",
"working on 4 -tuples, bestworst annotations are particularly efficient because each best and worst annotation will reveal the order of five of the six items (i.e., for a 4-tuple with items A, B, C, and D, if A is the best, and D is the worst, then A > B, A > C, A > D, B > D, and C > D).",
"It has been empirically shown that annotating 2 N 4 -tuples is sufficient for obtaining reliable scores (where N is the number of items) (Louviere, 1991; Kiritchenko and Mohammad, 2016).",
"Kiritchenko and Mohammad (2017) showed through empirical experiments that BWS produces more reliable and more discriminating scores than those obtained using rating scales.",
"2 In this paper, we describe how we obtained fine-grained human ratings of semantic relatedness for English term pairs involving at least one bigram.",
"3 The other term in the pair is either another bigram or a unigram.",
"We first selected a set of target bigrams AB (A represents the first word in the bigram and B represents the second word).",
"For each AB, we created several pairs of the form ABX, where X is a unigram or bigram.",
"As X's we chose terms from a diverse set of language resources: terms that are transpose bigrams BAwhere the first word is B and the second word is A (taken from occurrences in Wikipedia); terms that are related to AB by traditional semantic relations such as hypernymy, hyponymy, holonymy, meronymy, and synonymy (taken from WordNet); and terms that are co-aligned with AB in a parallel corpus (taken from a machine translation phrase table).",
"The dataset includes 3,345 term pairs corresponding to 410 ABs.",
"We refer to this dataset as the Bigram Relatedness Dataset (or, BiRD ).",
"We use BWS to obtain semantic relatedness by: (1) creating items that are pairs of terms, and (2) prompting four items (pairs) at a time and asking annotators to mark the pair that is most related and the pair that is least related.",
"Once the annotations are complete, we obtain real-valued scores of semantic relatedness for each pair using 2 See Kiritchenko and Mohammad (2016, 2017) for further details on BWS and its use in NLP applications.",
"In a separate project, the second author is developing a semantic relatedness dataset for unigrams using BWS (an order of magnitude larger than existing ones).",
"Project page: http://saifmohammad.com/WebPages/Relatedness.html simple arithmetic on the counts of how often an item is chosen best and worst (Orme, 2009; Flynn and Marley, 2014).",
"(Details in Section",
"3.) To evaluate the quality of BiRD we determine the consistency of the BWS annotations.",
"A commonly used approach to determine consistency in dimensional annotations is to calculate split-half reliability (Cronbach, 1951).",
"We show that our semantic relatedness annotations have a split-half reliability score of r = 0 .",
"937 , indicating high reliability, that is, if the annotations were repeated then similar scores and rankings would be obtained.",
"(Details in Section",
"4.) We use BiRD to",
"Examining Bigram Semantic Relatedness: Since very little work exists on the semantic relatedness of bigrams, several research questions remain unanswered, including: What is the distribution and mean of the semantic relatedness between a bigram and its",
"transpose?; What is the average semantic relatedness between a bigram and its",
"hypernym?; Are co-aligned terms from a phrase table a good source of term pairs to be included in a semantic relatedness dataset (specifically, do they cover a wide range of semantic relatedness",
"values)?; etc.",
"In Section 5, we present an analysis of BiRD to obtain insights into these questions.",
"Evaluating Semantic Composition: A common approach to evaluate different methods of representing words via vectors is through their ability to rank pairs of words by closeness in meaning (Pennington et al., 2014; Levy and Goldberg, 2014; Faruqui and Dyer, 2014).",
"BiRD allows for the evaluation of semantic composition methods through their ability to rank pairs involving bigrams, by semantic relatedness.",
"In Section 6, we present benchmark experiments on using BiRD as a testbed to evaluate various common semantic composition methods using various pre-trained word representations.",
"Specifically, we conduct experiments to gain insights into research questions such as: Which common mathematical operations for vector composition (e.g., vector addition, vector multiplication, etc.) capture the semantics of a bigram more",
"accurately?; Which of the two words in a noun phrase bigram (the head noun or the modifier) has greater influence on the semantics of the",
"bigram?; etc.",
"Contributions : The contributions of this work can be summarized as follows: We obtain fine-grained human ratings of semantic relatedness for 3,345 term pairs, each of which includes at least one bigram.",
"The other term in the pair is either another bigram or a unigram.",
"We use the comparative annotation technique BestWorst Scaling, which addresses the limitations of traditional rating scales.",
"This is the first time BWS has been used to create a dataset for semantic relatedness.",
"We show that the ratings obtained are highly reliable.",
"We analyse BiRD to obtain insights into semantic relatedness when it involves bigrams.",
"We also develop interactive visualizations that allow for easy exploration of the data.",
"(Available on the project webpage.) We present benchmark experiments on using BiRD as a testbed to evaluate methods of semantic composition.",
"The Bigram Relatedness Dataset, visualizations of the data, and the annotation questionnaire are made freely available through the project's webpage.",
"4 We hope that the new dataset will foster further research on how meaning is composed in bigrams, on semantic representation in general, and on the understanding of bigram semantic relatedness.",
"The annotation task described in this paper was approved by the National Research Council Canada's Research Ethics Board (protocol number 2018-72).",
"The board examines the proposed methods to ensure that they adhere to the required ethical standards.",
"Special attention was paid to obtaining informed consent and protecting participant anonymity.",
"Semantic Relatedness and Semantic Similarity Closeness of meaning can be of two kinds: semantic similarity and semantic relatedness.",
"Two terms are considered to be semantically similar if there is a taxonomic relationship 4 http://saifmohammad.com/WebPages/BiRD.html between them such as hyponymy (hypernymy), or troponymy.",
"Two terms are considered to be semantically related if there is any lexical semantic relation between themtaxonomic or non-taxonomic.",
"Semantically similar items tend to share a number of properties.",
"For example, apples and bananas (co-hyponyms of fruit) are both edible, they grow on trees, they have seeds, etc.",
"On the other hand, semantically related concepts may not have many properties in common, but there exists some relationship between them which lends them the property of being semantically close.",
"For example, surgeon and scalpel are semantically related as the former uses the latter for their work.",
"We focus on semantic relatedness in this work, not only because it is the broader class subsuming semantic similarity, but also because many psychology and neuro-linguistic studies have demonstrated the importance of semantic relatedness.",
"Notable among these are studies on semantic priming and fMRI studies that show that the human brain stores information in a thematic manner (based on relatedness) rather than based on similarity (Hutchison, 2003; Huth et al., 2016).",
"Word-Pair Datasets : Several semantic similarity and relatedness datasets involving unigram pairs (word pairs) exist.",
"Rubenstein and Goodenough (1965) and Miller and Charles (1991) provided influential but small English wordpair datasets with finegrained semantic similarity scores.",
"More recent larger datasets including hundreds of pairs were provided by Finkelstein et al. (2002) (for relatedness) and Hill et al. (2015) (for similarity).",
"Similar datasets exist in some other languages as well, such as the one by Gurevych (2006) and Panchenko et al. (2016) for relatedness.",
"However, none of these datasets include items that are bigrams.",
"Bigram Semantic Similarity Datasets : Mitchell and Lapata (2010) created a semantic similarity dataset for 324 bigram pairs.",
"The terms include adjectivenoun, nounnoun, and verbobject bigrams.",
"Annotators were asked to choose an integer between one and seven, indicating a coarse semantic similarity rating.",
"Turney (2012) compiled a dataset of 2,180 bigramunigram synonym pairs from WordNet synsets.",
"(The bigrams are either nounnoun or adjectivenoun phrases.)",
"Other pairs were created taking bigrams and words that do not exist in the same synsets.",
"He thus created a dataset of synonyms and non-synonyms.",
"In contrast to these datasets, BiRD has fine-grained relatedness scores.",
"Other Similarity Datasets : There exist datasets on the semantic similarity between sentences and between documents (Marelli et al., 2014; Agirre et al., 2014; Cera et al., 2017).",
"Those are outside the scope of this work.",
"Other Natural Language Datasets Created Using BWS : BWS has been used for creating datasets for relational similarity (Jurgens et al., 2012), word-sense disambiguation (Jurgens, 2013), wordsentiment intensity (Kiritchenko and Mohammad, 2016), wordemotion intensity (Mohammad, 2018b), and tweetemotion intensity (Mohammad and Kiritchenko, 2018).",
"The largest BWS dataset is the NRC Valence, Arousal, and Dominance Lexicon, which has valence, arousal, and dominance scores for over 20,000 English words (Mohammad, 2018a).",
"We first describe how we selected the term pairs to include in the bigram relatedness dataset, followed by how they were annotated using BWS.",
"Randomly selecting term pairs will result in most pairs being unrelated.",
"This is sub-optimal in terms of the human annotation effort that is to follow.",
"Further, since our goal is to create a gold standard relatedness dataset, we wanted it to include term pairs across the whole range of semantic relatedness: from maximally unrelated to maximally related.",
"Thus, a key challenge in term-pair selection is obtaining pairs with a wide range of semantic relatedness scores, without knowing their true semantic relatedness in advance.",
"In addition, we also wanted the dataset to satisfy the following criteria: For each target bigram AB we wanted to include several pairs of the form ABX, where X is a unigram or bigram.",
"Motivation: Applications of semantic relatedness, such as real-word spelling correction and textual entailment, often require judgments of the form is AB X 1 more related or less related than AB X 2 '.",
"There should exist some pairs ABX, such that X is BA and a common English bigram.",
"Motivation: This is useful for testing sensitivity of semantic composition models to word order.",
"The unigrams and bigrams should be commonly used English terms.",
"Motivation: Data annotation of common terms is expected to be more reliable.",
"Also, common terms are more likely to occur in application datasets.",
"There should exist pairs that are taxonomically related (i.e., semantically similar), for example, hypernyms, hyponyms, holonyms,",
"etc.; and there should exist pairs that are not taxonomically related but semantically related nonetheless.",
"Motivation: This increases dataset diversity.",
"We focus on noun phrases (adjectivenoun and nounnoun bigrams).",
"Motivation: Noun phrases are the most frequent phrases.",
"To pursue these criteria, we compiled a set of term pairs from three diverse sources (Wikipedia, WordNet, and a machine translation phrase table) as described below.",
"Wikipedia: We chose to collect our target bigrams from the English Wikipedia dump (2018).",
"5 The corpus was tagged with parts of speech (POS) using the NLTK toolbox.",
"6 For each of the adjectivenoun and nounnoun bigrams AB in the corpus, we checked to see if the bigram BA (its transpose) also exists in the corpus.",
"We will refer to such pairs of bigrams as transpose bigrams .",
"Only those transpose bigrams (AB and BA) were selected that were both noun phrases and where both AB and BA occur in the corpus with frequencies greater than a pre-chosen threshold t (we chose t = 30 ).",
"For a pair of transpose bigrams, the bigram with the higher frequency was chosen as AB and the bigram with the lower frequency was chosen as the corresponding BA.",
"The above process resulted in 4,095 transpose pairs (ABBA).",
"WordNet: Among the 4,095 ABs, 330 exist in WordNet version 3.0 (Fellbaum, 1998).",
"7 For each of these, we selected (when available) synonyms (at most five), a hypernym, a hyponym, a holonym, and a meronym from WordNet.",
"Translation Phrase Table: Word-aligned parallel corpora map words in text of one language to those in text of another language.",
"Often this can lead to more than one word/phrase in one language being mapped to a common word/phrase in the other language.",
"We will refer to such terms as being co-aligned .",
"Due to the nature of languages and the various forms that the same text can be translated to, co-aligned terms tend to include not just synonyms but also other semantically related terms, and sometimes even unrelated terms.",
"Thus, we hypothesize that it is beneficial to include pairs of co-aligned terms in a semantic relatedness dataset as they pertain to varying degrees of semantic relatedness.",
"We used an EnglishFrench phrase table from the Portage Machine Translation Toolkit (Larkin et al., 2010) to determine additional pairs ABX.",
"8 Specifically, for each ABF entry in the phrase table (where F is a French term) we keep the five most frequent English unigrams and the five most frequent English bigrams (other than AB) that are also aligned to F. Among the 4,095 ABs, 454 occurred in the phrase table.",
"This resulted in 3,255 ABX pairs in total (1,897 where X is a unigram, and 1,358 where X is a bigram).",
"Finally, we chose to filter the term pairs, keeping only those ABs that occurred in at least three unique pairs.",
"(So for a given AB, apart from the ABBA entry, there should be at least two other entries of the form ABX, generated using WordNet or the phrase table.)",
"We also manually examined the remaining entries and removed those with obscure terms.",
"The final master term pairs list consists of 3,345 ABX pairs in total (1,718 where X is a unigram, and 1,627 where X is a bigram), corresponding to 410 ABs.",
"Thus on average, each AB occurred in about 8 distinct pairs.",
"This is yet another aspect that makes BiRD unique, as existing datasets were not designed to include terms in multiple pairs.",
"Table 1 shows the number of adjectivenoun pairs, the number of nounnoun pairs, and the total number of pairs in BiRD.",
"(We grouped the hypernym and hyponym pairs into a common class, which we will refer to as the is-a pairs. Similarly we group the meronym and holonym pairs into a common class, which we will refer to as the part-whole pairs.) 8 French was chosen as it is close to English and there exist EnglishFrench parallel corpora of sufficient size.",
"As mentioned in the introduction, we use the comparative annotation method BestWorst Scaling (BWS) to obtain the annotations.",
"From the list of N = 3 , 345 term pairs, we generated 2 N = 6 , 690 distinct 4-tuples (each 4-tuple is a set of four term pairs) such that each term pair appears in roughly equal distinct tuples, and no term pair appears more than once in a tuple.",
"9 (Recall that past research has shown that generating 2N 4-tuples in this manner is sufficient for obtaining fairly reliable scores (Louviere, 1991; Kiritchenko and Mohammad, 2017; Mohammad,",
"2018a).) The annotators were presented with one tuple at a time and were asked to specify which of the four pairs is most close in meaning (or most related) and which term is the least close (or least related).",
"Detailed annotation instructions (with examples of appropriate and inappropriate responses) were provided.",
"Notably, we made it clear that if terms in the pair have several meanings, then the annotators should consider the meanings that are closest to each other.",
"We also asked the annotators to be mindful of word order (i.e., the meaning of a bigram AB may be different from the meaning of its transpose BA).",
"We set up the annotation task on the crowdsourcing platform, Figure Eight.",
"10 We did not collect personally identifiable information from the annotators.",
"The compensation that the annotators would receive was clearly stated.",
"We selected a pool of annotators fluent in English and with a history of high-quality annotations.",
"Annotators were told that they could annotate as many instances as they wished.",
"As mentioned in the Introduction, prior to the annotation, the planned procedure was approved by the National Research Council Canada's Research Ethics Board (protocol number 2018-72).",
"About 2% of the data was annotated beforehand by the authors.",
"These questions are referred to as gold questions.",
"Figure Eight interspersed the gold questions with the other questions.",
"If a crowd worker answered a gold question incorrectly, then they were immediately notified.",
"This served as an additional way to guide the annotators.",
"If an annotator's accuracy on the gold questions fell below 70%, then they were refused further annotation, and all of their annotations were discarded.",
"This served as a mechanism to avoid malicious annotations.",
"In the task settings for Figure Eight, we specified that we needed annotations from eight people for each 4-tuple.",
"11 In all, 57,482 pairs of best and worst responses were obtained from 427 annotators.",
"12 Annotation Aggregation: The final semantic relatedness scores were calculated from the BWS responses using a simple counting procedure (Orme, 2009; Flynn and Marley, 2014): For each term pair, the semantic relatedness score is the proportion of times the term pair was chosen as the best minus the proportion of times the term pair was chosen as the worst.",
"13 The scores were linearly transformed to the interval: 0 (lowest semantic relatedness) to 1 (highest semantic relatedness).",
"We refer to the final list of 3,345 English term pairs along with their scores for semantic relatedness as the Bigram Relatedness Dataset (BiRD) .",
"Table 2 summarizes key annotation statistics.",
"A commonly used measure of quality in dimensional annotation tasks is the reproducibility of the final scoresthe extent to which repeated independent manual annotations produce similar results.",
"To assess this reproducibility, we calculate average split-half reliability (SHR) (Cronbach, 1951) as follows: 11 Note that since each term pair occurs in eight different 4-tuples, it is involved in 8 8 = 64 bestworst judgments.",
"12 Gold questions were annotated more than eight times.",
"13 More complex optimization algorithms exist, such as those described in (Hollis, 2018); however, our past experiments showed that the simple counting procedure obtained the most reliable results.",
"The annotations for each 4-tuple are randomly split into two halves.",
"One set is put in bin 1 and another set in bin 2.",
"Next, two sets of semantic relatedness scores are produced independently from the two bins, 1 and 2, respectively.",
"Then the Pearson correlation between the two sets of scores is calculated.",
"If the annotations are of good quality, then the correlation between the two sets of relatedness scores will be high (closer to 1).",
"14 This process is repeated 100 times, and the correlations are averaged.",
"The last column in Table 2 shows the result.",
"An SHR of r = 0 .",
"9374 indicates high reliability.",
"Since very little prior work exists on the semantic relatedness of bigrams, several research questions remain unanswered, including:",
"If both AB and BA are common English bigrams, then what is the average semantic relatedness between AB and BA?",
"What is the range of semantic relatedness between a bigram and its hypernym or hyponym?",
"What is the average semantic relatedness of such pairs?",
"How do these averages and standard deviations vary with respect to the different semantic relations?",
"What is the distribution of semantic relatedness values for co-aligned terms?",
"We now present analyses of the relatedness dataset to obtain insights into these questions.",
"Figure 1 shows example adjectivenoun and nounnoun entries from BiRD.",
"Observe that for the term ageing population , the most related term is ageing society a co-aligned term in the phrase table.",
"(Other co-aligned terms have lower relatedness scores.)",
"The transpose bigram population ageing is also marked as highly related.",
"WordNet does not provide a synonym for ageing population .",
"For the term adult female , the WordNet synonym and the transposed bigram (BA) are marked as being most related.",
"Note that the WordNet-provided hyponym amazon is marked as less related (probably because that sense of amazon is rare).",
"BiRD can be examined 14 Scores close to 0 indicate no correlation.",
"for each individual relation and sorted by relatedness scores to determine other example pairs that seemingly should be closely related, but are not highly semantically related in the perception of the average English speaker.",
"These include pairs such as subject areadiscipline (WordNet synonym) and frying panspider (WordNet hyponym).",
"The ABBA pairs with low relatedness, such as law schoolschool law, home runrun home, and traffic lightlight traffic are especially useful in testing whether measures of semantic composition generate suitably different representations for the terms in such pairs.",
"Table 3 shows the average semantic relatedness scores as well as standard deviations for the term pairs from various sources.",
"15 Observe that, on average, the ABBA pairs and the ABWordNet synonym pairs are found to be the most related.",
"On average, the ABWordNet part-whole pairs and the ABphrase table co-aligned pairs have the lowest semantic relatedness scores.",
"The high average relatedness and low standard deviation ( \u0000 ) for the transpose bigrams, indicate that these pairs tend to be closely related to each other.",
"The standard deviation is markedly higher for the other sources of word pairs.",
"Manual examination of such pairs (especially those involving WordNet synonyms) revealed that this is often because one of the terms might be related to the other in a rare sense (such as in the amazon example).",
"The high standard deviations for hypernyms, hyponyms, meronyms, and holonyms, indicate that pairs connected by this relation in WordNet can still exhibit a wide range of semantic relatedness.",
"of the co-aligned pairs have semantic relatedness between 0.09 and 0.83 (a wide interval).",
"Manual examination revealed that the lowest score pairs were unrelated and the highest score terms were often synonymous.",
"Thus co-aligned pairs from phrase tables are indeed a good source of term pairs for a semantic relatedness dataset, since they include pairs with a wide variety of relatedness values.",
"A popular approach to represent word meaning in natural language systems is through vectors that capture the contexts in which the word occurs.",
"An area of active research is how these word vectors can be composed to create representations for larger units of text such as phrases and sentences (Mitchell and Lapata, 2010; Baroni and Zamparelli, 2010; Socher et al., 2012; Tai et al., 2015).",
"Even though there is a large body of work on how to represent the meanings of sentences (Le and Mikolov, 2014; Kiros et al., 2015; Lin et al., 2017), there is relatively less work on how best to compose the meanings of two words to represent the meaning of a bigram.",
"One reason for this is a lack of suitable evaluation resources.",
"A common approach to evaluate representations of unigrams is through their ability to rank pairs of words by closeness in meaning (Pennington et al., 2014; Levy and Goldberg, 2014; Faruqui and Dyer, 2014).",
"BiRD allows for the evaluation of semantic composition methods through their ability to rank pairs involving bigrams, by semantic relatedness.",
"Here, we present benchmark experiments on commonly used semantic composition methods by measuring their ability to rank the term pairs in BiRD by relatedness scores.",
"The underlying assumption is that the more accurately a method of semantic composition can determine the representation of a bigram, the more accurately systems can determine the relatedness of that bigram with other terms.",
"We focus on unsupervised approaches as we wanted to identify how well basic composition operations perform.",
"The applicability of BiRD is much broader though, and it can be used: (1) for evaluating the large number of proposed supervised methods of semantic composition; (2) for evaluating the large number of measures of semantic relatedness; (3) to study the mechanisms underpinning semantic composition; etc.",
"We leave those for future work.",
"We test three vector space models to obtain word representations: GloVe (Pennington et al., 2014), fastText (Grave et al., 2018), and a traditional model based on matrix factorization of a wordcontext co-occurrence matrix (Turney et al., 2011).",
"We test four mathematical composition operations: (1) vector addition, (2) element-wise vector multiplication, (3) tensor product with circular convolution (Widdows, 2008), and (4) dilation (Mitchell and Lapata, 2010).",
"In adjectivenoun and nounnoun bigrams, the second word usually plays a role of a head noun, and the first word is a modifier.",
"We test the performance of two baseline methods that do not employ vector composition: one that represents a bigram with the vector for the first word and one that represents a bigram with the vector for the second word.",
"Word representations : We use GloVe word embeddings pre-trained on 840B-token CommonCrawl corpus 16 and fastText word embeddings pre-trained on Common Crawl and Wikipedia using CBOW.",
"17 For the traditional model, we use the exact wordcontext co-occurrence matrix described in Turney et al. (2011).",
"18 They created the matrix from a corpus of 5 10 10 tokens gathered from university websites.",
"The rows correspond to terms (single words from WordNet) and the columns correspond to contexts (single words from WordNet appearing to the left or to the right of the term).",
"Each cell of the matrix is the positive pointwise mutual information between the term and the context.",
"The matrix is decomposed to U d d V > d ( d denotes dimensionality) via truncated singular value decomposition.",
"Word vectors are obtained from the matrix U d pd , where rows correspond to the d -dimensional 16 https://nlp.stanford.edu/projects/glove/ 17 https://fasttext.cc/docs/en/crawl-vectors.html 18 We thank Peter Turney for providing the data.",
"word vectors and p is the weight factor for singular values in d .",
"We set parameter p to 0.5, and the dimensionality of word vectors to d = 300 for all three vector space models.",
"Unsupervised Compositional Models: For a bigram w 1 w 2 , let u 2 R 1 d and v 2 R 1 d denote the vectors for words w 1 and w 2 , respectively.",
"Each of the methods below applies a different composition function f on the word vectors u and v to obtain the vector representation p for the bigram w 1 w 2 : p = f ( u, v ) : Addition (Salton and McGill, 1986): add the two word vectors ( p = u + v ).",
"Multiplication (Mitchell and Lapata, 2010): element-wise multiplication of the two vectors ( p = u \u0000 v , where p i = u i v i ).",
"Tensor product with convolution (Widdows, 2008): outer product of two vectors resulting in matrix Q ( q ij = u i v j ).",
"Then, circular convolution is applied to map Q to vector p .",
"This is equivalent to: p i = P j u j v i \u0000 j .",
"Dilation (Mitchell and Lapata, 2010): decompose v to parallel and orthogonal components to u , and then stretch the parallel component along u ( p i = v i P j u j u j + ( \u0000 \u0000 1) u i P j u j v j , where \u0000 is the dilation factor).",
"We set \u0000 = 2 .",
"For the two baseline experiments that do not employ vector composition, head only : p = v and modifier only : p = u .",
"Semantic Relatedness : The relatedness score for a term pair ABX in the Bigram Relatedness Dataset (BiRD) is computed by taking the cosine between the vectors representing AB and X, where X can be a unigram or a bigram.",
"Evaluation: As evaluation metric, we use the Pearson correlation of the relatedness scores predicted by a method with the gold relatedness scores in BiRD.",
"Some words in BiRD do not occur in some of the corpora used to create the word vectors.",
"Thus we conduct experiments on a subset of BiRD (3,159 pairs) for which word vectors exist for all models under consideration.",
"To determine if the differences between the correlation scores are statistically significant, we perform Steiger's Z significance test (Steiger, 1980).",
"Results: Table 4 shows the results.",
"Observe that among the three methods of word vector representations, the best results are obtained using fastText (wordcontext matrix factorization model being a close second).",
"Among the methods of semantic composition, the additive models perform best (for all three ways of representing word vectors).",
"The scores are statistically significantly higher than those of the second best (dilation).",
"The element-wise vector multiplication and tensor product with convolution perform poorly (even worse than the baseline methods).",
"These results differ substantially from the observations by Mitchell and Lapata (2010).",
"In particular, in their work the multiplication model showed the best results, markedly outperforming the addition model.",
"Our results are consistent with the findings of Turney (2012), where too the addition model performed better than the multiplication model.",
"It should be noted though that unlike BiRD which has scores for semantic relatedness, the Mitchell and Lapata (2010) and Turney (2012) datasets have scores for semantic similarity.",
"Further work is required to determine whether certain composition models are better suited for estimating one or the other.",
"Surprisingly, the baseline model that uses the vector for the modifier word obtains better results than the one that uses the vector for the head noun.",
"(The difference is statistically significant.)",
"To better understand this, we compute relatedness correlations using the weighted addition of the two word vectors ( p = u + (1 \u0000 ) v ), where is a parameter that we vary between 0 and 1, in steps of 0.1.",
"Figure 2 shows the results.",
"Observe that giving more weight (but not too much weight) to the modifier word than the head word is beneficial.",
"= 0 .",
"7 and = 0 .",
"8 produce the highest correlations.",
"These results raise further questions under what conditions is the role of the modifier particularly prominent, and why.",
"We leave that for future work.",
"We created a dataset with fine-grained human ratings of semantic relatedness for term pairs involving bigrams.",
"We used the comparative annotation technique BestWorst Scaling, which addresses the limitations of traditional rating scales.",
"We showed that the ratings obtained are highly reliable (high SHR, r = 0 . 937 ).",
"We analyzed the dataset to obtain insights into the distributions of semantic relatedness values for pairs associated through various relations such as WordNet assigned lexical semantic relations, transposed bigrams, and co-aligned terms in a parallel corpus.",
"We show that co-aligned terms can be related to varying degrees (from unrelated to synonymous), thereby making them a useful source of term pairs to include in relatedness datasets.",
"Finally, we presented benchmark experiments on using BiRD as a testbed to evaluate various unsupervised methods of semantic composition.",
"We found that the additive models performed best and that giving more weight to the modifier word can improve results further.",
"We make BiRD freely available to foster further research.",
"In the short term, it will be interesting to explore the use of supervised semantic composition methods, including resources and models such as BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018), to determine bigram relatedness.",
"We thank Peter Turney, Michel Simard, and Tara Small for helpful discussions.",
"This work is partially supported by the German Research Foundation (DFG) within the Research Training Group QuantLA (GRK 1763)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"result",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"result",
"method",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"When intelligent agents communicate to accomplish shared goals, how do these goals shape the agents' language?",
"We study the dynamics of learning in latent language policies (LLPs), in which instructor agents generate natural-language subgoal descriptions and executor agents map these descriptions to low-level actions.",
"LLPs can solve challenging long-horizon reinforcement learning problems and provide a rich model for studying task-oriented language use.",
"But previous work has found that LLP training is prone to semantic drift (use of messages in ways inconsistent with their original natural language mean-ings).",
"Here, we demonstrate theoretically and empirically that multitask training is an effective counter to this problem: we prove that multitask training eliminates semantic drift in a well-studied family of signaling games, and show that multitask training of neural LLPs in a complex strategy game reduces drift and while improving sample efficiency.",
"A major goal in the study of artificial and natural intelligence is to understand how language can scaffold more general problem-solving skills (e.g. Spelke, 2017), and how these skills in turn shape language itself (e.g. Gibson et al., 2017).",
"In NLP and machine learning, latent language policies (LLPs; Andreas et al., 2018) provide a standard framework for studying these questions.",
"An LLP consists of instructor and executor subpolicies: the instructor generates natural language messages (e.g. high-level commands or subgoals), and the executor maps these messages to sequences of low-level actions (Fig. 1).",
"LLPs have been used to construct interactive agents capable of complex reasoning (e.g. programming by demonstration) and planning over long horizons (e.g. in strategy games; Hu et al., 2019).",
"They promise an effective and interpretable interface between planning and control.",
"However, they present a number of challenges for training.",
"As LLPs employ a human-specified space of high-level commands, they must be initialized with human supervision, typically obtained by pretraining the executor.",
"On its own, this training paradigm restricts the quality of the learned executor policy to that exhibited in (possibly suboptimal) human supervision.",
"For tasks like the real-time strategy game depicted in Fig. 1, we would like to study LLPs trained via reinforcement learning (RL) , jointly learning from a downstream reward signal, and optimizing both instructors and executors for task success rather than fidelity to human teachers.",
"Training LLPs via RL has proven difficult.",
"Past work has identified two main challenges: primarily, the LLP-specific problem of semantic drift , in which agents come to deploy messages in ways inconsistent with their original (natural language) meanings (Lewis et al., 2017; Lee et al., 2019); secondarily, the general problem of sample inef-ficiency in RL algorithms (Kakade et al., 2003; Brunskill and Li, 2013).",
"Model-free deep RL is particularly notorious for requiring enormous amounts of interaction with the environment (Munos et al., 2016; Mnih et al., 2013b).",
"For LLPs to meet their promise as flexible, controllable, and understandable tools for deep learning, better approaches are needed to limit semantic drift and perhaps improve sample efficiency.",
"While semantic change is a constant and well-documented feature of human languages (McMa-hon and April, 1994), (human) word meanings are on the whole remarkably stable relative to the rate of change in the tasks for which words are deployed (Karjus et al., 2020).",
"In particular, disappearance of lexical items is mitigated by increased population size (Bromham et al., 2015) and increased frequency of use (Pagel et al., 2007).",
"Drawing on these facts about stabilizing factors in human language, we hypothesize that training of machine learning models with latent language variables can be made more robust by incorporating a population of instructors with diverse communicative needs that exercise different parts of the lexicon.",
"We describe a multitask LLP training scheme in which task-specific instructors communicate with a shared executor.",
"We show that complex long-horizon LLPs can be effectively tuned via joint reinforcement learning of instructors and executors using multitask training : Section 3 presents a formal analysis of LLP training as an iterated Lewis signalling game (Lewis, 1969).",
"By modeling learning in this game as a dynamical system, we completely characterize a class of simple policies that are subject to semantic drift.",
"We show that a particular multitask training scheme eliminates the set of initializations that undergo semantic drift.",
"Section 4 evaluates the empirical effectiveness of multitask learning in a real-time strategy game featuring rich language, complex complex dynamics, and LLPs implemented with deep neural networks.",
"Again, we show that multitask training reduces semantic drift (and improves sample efficiency) of LLPs in multiple game variants.",
"Together, these results show that diverse shared goals and communicative needs can facilitate (and specifically stabilize) learning of communication strategies.",
"Deep reinforcement learning (DRL) has recently made impressive progress on many challenging domains such as games (Mnih et al., 2013a; Silver et al., 2016), locomotion (Schulman et al., 2015) and dexterous manipulation tasks (Gu et al., 2016; Rajeswaran et al., 2017).",
"However, even state-of-the-art approaches to reinforcement struggle with tasks involving complex goals, sparse rewards, and long time horizons.",
"A variety of models and algorithms for hierarchical reinforcement learning have been proposed to address this challenge (Dayan and Hinton, 1993; Dietterich, 2000; Richard et al., 1999; Bacon et al., 2017) via supervised or unsupervised training of a fixed, discrete set of sub-policies.",
"Language can express arbitrary goals, and has compositional structure that allows generalization across commands.",
"Building on this intuition, several recent papers have explored hierarchical RL in which natural language is used to parameterize the space of high-level actions (Oh et al., 2017; Andreas et al., 2017; Shu et al., 2018; Jiang et al., 2019; Hu et al., 2019).",
"While there are minor implementation differences between all these approaches, we will refer to them collectively as latent language policies (LLPs) .",
"Like other hierarchical agents, an LLP consists of a pair of subpolicies: an instructor I ( m | o ) and an executor E ( a | m, o ) .",
"An LLP takes actions by first sampling a string-valued message m I from the instructor, and then an action a E from the executor.",
"For these messages to correspond to natural language , rather than arbitrary strings, policies need some source of information about what human language users think they mean.",
"This is typically accomplished by pretraining executors via human demonstrations or reinforcement learning; here we focus on the ingredients of effective joint RL of instructors and executors.",
"Reinforcement learning has been widely used to improve supervised language generation policies, particularly for dialogue (Li et al., 2016; Lewis et al., 2017), translation (Ranzato et al., 2015; Wu et al., 2016) and summarization (Stiennon et al., 2020).",
"Here, we instead focus on models where language is a latent variable as part of a hierarchical policy for a non-linguistic task.",
"As noted in Section 1, an observed shortcoming of reinforcement learning in all these settings is its susceptibility to semantic drift .",
"In the literature on human language change (Blank, 1999), semantic drift refers to a variety of phenomena, including specific terms becoming more general, general terms becoming specific, and parts coming to refer to wholes.",
"In machine learning, it refers broadly to the use of messages inconsistent with their natural language meanings in language-generation policies (Lazaridou et al., 2020).",
"Lee et al. (2019) mitigate semantic drift in pivot-based machine translation by using visual grounding, whereas Lu et al. (2020) periodically update a student model on data generated by an RL teacher.",
"Work in emergent communication has found that reinforcement learning tends not to learn policies with natural language-like properties (Kottur et al., 2017), although population-based training has been found to be helpful (Gupta et al., 2019).",
"Most relatedly to our work, Lazaridou et al. (2020) train speaker-listener agents jointly in a visual referential communication task and introduce auxiliary loss functions for stabilizing training.",
"Our work focuses on a more general setting where the interactions are temporally extended, have large action spaces and is partially observable.",
"Agarwal et al. (2019) use populations of agents to reduce semantic drift in visual dialogue.",
"We view the current paper's analysis of multitask learning as complementary to these approaches from the emergent communication literature; future work might consider ways of combining the two.",
"A great deal of recent work in both RL (e.g. Jaderberg et al., 2016; Shelhamer et al., 2016) and language processing (e.g. Clark et al., 2019; Gu-rurangan et al., 2020) has observed that carefully designed training objectives can serve as a source of model-agnostic inductive bias.",
"Our results bring these two lines of work together: multitask training improves the faithfulness and adaptability of learned language understanding models, even when optimizing for a downstream reward.",
"We begin our analysis with the simple signaling game depicted in Fig. 2. In this game, one agent receives an observation, then sends a message to another agent, which then performs an action.",
"Signaling games like this one are widely studied in NLP as models of reference resolution and language generation (Frank and Goodman, 2012).",
"The instructorexecutor pair may together be viewed as the simplest LLP of the kind described in Section 2. Formally, a (2-observation, 2-message) Lewis signalling game is defined by: a set of observations O = { o 1 , o 2 } a set of messages M = { m 1 , m 2 } a set of actions A = { a 1 , a 2 } a reward function R : O A R The game is played between two agents: a instructor (with parameters I ), which receives an observation and samples an observation-specific message from a distribution I ( m | o ; I ) ; and a executor (with parameters E ), which receives the instructor's message and uses it to sample an action from a distribution E ( a | m ; E ) .",
"The agents then receive a reward R ( o, a ) that depends on the observation and action but not on the message sent.",
"This policy's expected reward is given by: (cid:88) o O m M a A p ( o ) I ( m | o ; I ) E ( a | m ; E ) R ( o, a ) .",
"Gradient ascent on Eq.",
"(1) with respect to I and E (e.g. using a policy gradient algorithm; Williams, 1992) can be used to improve the expected reward obtained by an LLP.",
"As an example, the right portion of Fig. 2 shows two reward functions R and R (cid:48) .",
"In both, each observation is paired with a single action, and the executor must take the action corresponding to the observation to receive a positive reward.",
"For R , two strategies obtain the optimal expected reward of 1: one in which I ( m 1 | (cid:78) ) = 1 and E ( red | m 1 ) = 1 , and one in which I ( m 1 | (cid:4) ) = 1 and E ( blue | m 1 ) = 1 .",
"Almost every initialization of E and I (excluding a set of pooling equilibria ; see e.g. Huttegger et al., 2010) converges to one of these two strategies when agents are jointly trained to optimize Eq.",
"(1).",
"Semantic drift Suppose, as shown in Fig. 2, the messages m 1 and m 2 are not arbitrary symbols, but correspond to the natural language expressions m 1 = push red and m 2 = push blue .",
"In this case, only of the policies described above corresponds to the semantics of natural languagenamely, the one in which E ( a 1 | m 1 ) = E ( red | push red ) = 1 .",
"What is needed to ensure that a pair of agents playing converge to the natural language strategy?",
"I ( m j | o i ; I ) = (cid:40) I i = j 1 I otherwise (2) E ( a j | m i ) = (cid:40) E i = j 1 E otherwise (3)",
"For the game depicted in Fig. 2, we would like to avoid any outcome in which, after training, E = E ( red | push red ) < 12 .",
"More generally, let us assume that we have an initial set of executor parameters that are possibly suboptimal but correspond to natural language semantics in the sense that E ( a | m i ; (0) E ) > 12 if and only if the meaning of m is do a .",
"In this case, we will say that a parameter initialization ( (0) I , (0) E ) undergoes executor semantic drift if, after training, any such E ( a | m i ; E ) = E < 12 .",
"To analyze semantic drift in this game, we consider the final values of the parameters ( I , E ) when optimized from an initialization ( (0) I , (0) E ) .",
"For the reward function R depicted in Fig. 2, we can perform gradient ascent on Eq.",
"(1) with respect to I and E in this model by observing that: J I = E 1 2 J E = I 1 2 (4) By considering the limiting behavior of gradient ascent as step size goes to zero (a gradient flow ; see Appendix A), it is possible to give a closed-form expression for the value of these parameters as a function of time: Proposition 1. Suppose (0) E + (0) I < 1 .",
"Then two agents optimizing Eq.",
"(1) via Eq.",
"(4) undergo semantic drift (converging to E = 0 ).",
"Proof is given in Appendix A. Note in particular that semantic drift will occur whenever (0) I < 1 (0) E , which can occur even assuming a well-initialized executor with E > 12 .",
"Fig. 5 in the appendix provides a visualization of learning dynamics and these drift-susceptible initializations.",
"However, we will next show that this drift can be eliminated via multitask training.",
"Multitask signaling games Consider a multitask version of this game with the two reward functions R and R (cid:48) depicted in Fig. 2. As discussed in the introduction and depicted in Fig. 1, our approach to multitask training focuses on sharing a single executor E ( a | m ; E ) between multiple task-specific instructors, here I ( m | o ; I 1 ) and I ( m | o ; I 2 ) , both parameterized as in Eq.",
"(2).",
"As above, we train ( I 1 , I 2 , E ) jointly to optimize: (cid:88) o,m,a p ( o ) I ( m | o ; I 1 ) E ( a | m ; E ) R ( o, a ) + (cid:88) o,m,a p ( o ) I ( m | o ; I 2 ) E ( a | m ; E ) R (cid:48) ( o, a ) (5) We assume that the two instructors share the same initialization, with (0) I 1 = (0) I 2 = (0) I .",
"In this case, the following is true: Proposition 2. Suppose (0) E > 12 and (0) I 1 = (0) I 2 .",
"Then three agents optimizing Eq.",
"(5) via its gradient flow do not undergo semantic drift.",
"In fact, the eventual executor parameter ( t ) E is independent of the initial speaker parameters (0) I 1 and (0) I 2 .",
"Proof is again given in Appendix A. It is important to emphasize that these results concern the simplest possible policies for the signaling games considered here: agents with a single parameter which already bake in the assumption that different signals should trigger different behaviors.",
"We leave generalization of this formal analysis to general signaling games with more complex agents and message spaces for future work, noting that at least in this simple casewe have succeeded in constructing a concrete multitask objective that reduces (indeed eliminates) the set of initial model parameters subject to semantic drift.",
"We next verify whether this result extends to the complex LLP-learning tasks discussed in Section 2. Our focus in this section is the MINIRTS environment of Hu et al. (2019) (depicted in Fig. 1), in which agents must build and control an army of units like archers, spearmen, swordman, cavalry, and dragons, each with specialized abilities, with the goal of destroying the opponent's town center.",
"Using this game, Hu et al. (2019) crowdsourced a dataset of high-level instructions (like attack with dragon and send idle peasant to mine ) paired with low-level action sequences (Fig. 1).",
"They showed that an LLP trained on this supervised data via behavior cloning significantly outperformed a flat policy trained with imitation learning directly on low-level action sequences.",
"Here we investigate (1) whether these policy-cloned LLP can be further improved via reinforcement learning directly on a sparse winloss signal from the game, (2) whether we can improve sample efficiency during reinforcement learning by jointly training executor models on multiple game variants simultaneously through multitask learning, and (3) whether semantic drift can be avoided during multi-task training.",
"Below, Section 4.1, Section 4.2 and Section 4.3 provide more detail about the task, model, and training procedure.",
"Section 4.4 reports experimental results.",
"MINIRTS is a partially-observable real-time strategy game environment, in which the actions of a large number of units must be coordinated on long time scales to defeat an opposing player.",
"In a typical episode, a player must use its initial units to gather resources, use resources to build specialized structures for producing other units, and finally deploy these units to attack the opposing player's base.",
"This involves challenging problems in both low-level tactics (controlling the placement of individual units for resource-gathering and combat) and high-level strategy (deciding which unit types to build, and when to deploy them).",
"MINIRTS additionally features a dataset collected from pairs of humans playing collaboratively against rule-based opponents.",
"One human, the instructor , designs high-level strategies and describes them in natural language.",
"The other human, the executor observes the environment state as well as the natural language strategy descriptions from the instructor and selects appropriate low-level actions.",
"The dataset consists of 5,392 games, with a total of 76,045 (instruction, execution) pairs.",
"Hu et al. (2019) use the labeled data to train an LLP for the MINIRTS environment.",
"Our experiments use the same model architecture (Fig. 3), which we briefly review here; see the original for details.",
"Observation encoder The instructor and executor models condition on a fixed-sized representation of the current game state which are constructed using different encoders for various aspects of the game state (Fig. 3): Spatial input encoder : The spatial information of the map is encoded using a convolutional neural network.",
"Non-spatial input encoder : The non-spatial attributes and internal state of game objects are encoded using a simple MLP.",
"These include attributes like the number of enemy units, the agent's units, and resource locations.",
"Instruction encoder : The current instruction is encoded with a recurrent neural network.",
"Auxiliary encoder : Global variables, such as the total number of resources collected, are additionally encoded with an MLP.",
"Instructor model The instructor takes in the game state from the observation encoder and produces instructions.",
"The 500 instructions appearing most frequently in the training set are encoded with an RNN into a fixed-sized vector.",
"The score for each instruction is proportional to its dot product with the game state encoding.",
"This instructor model achieved the best performance on several metrics in the original work (Hu et al., 2019).",
"By restricting the instructor to the most frequent 500 well-formed natural language strings, we are able to focus our attention on semantic drift.",
"A generative model free to generate arbitrary strings might also be subject to syntactic drift.",
"Executor model The executor predicts an action for every unit controlled by the agent based on of the current observation as encoded by the various encoders.",
"The executor then predicts an action { } dim { dim { dim { dim # our units }",
"based on these features.",
"In particular, for each unit, it predicts one of the 7 action types (IDLE , CONTINUE , GATHER , ATTACK , TRAINUNIT , BUILDBUILDING , MOVE ), an action target location (for the MOVE , ATTACK , GATHER and BUILDBUILDING actions) and a unit type (for the TRAINUNIT and BUILDBUILDING actions).",
"Taking the product of the factorized action and location arguments across all units, the action space for the executor can be enormous, with as many as 10 24 distinct actions available on a single turn.",
"As mentioned above, the original work of Hu et al. (2019) (and other work on learning LLPs) focused on behavior cloning or independent supervision of the instructor and executor .",
"In the current paper, we are interested in the the dynamics of joint reinforcement learning of LLPs in both singleand multitask settings.",
"Experiments in Section 4.4 make use of models trained with all three strategies.",
"Rule-based opponent pool Agents are trained against a similar pool of rule-based bots (see Hu et al., 2019) used to collect the human data.",
"These bots follows a randomly selected, unit-specific strategy, building a fixed number of SWORDMEN , SPEARMEN , CAVALRY , ARCHERS or DRAGONS and attacking as soon as they are constructed.",
"Behavior cloning Behavior-cloned models are trained using the supervised MINIRTS dataset.",
"Given a collection of game observations o , each annotated with a high-level action m and a low-level action a , we maximize: max I , E (cid:88) o,m,a (cid:2) log I ( m | o ; I ) + log E ( a | m, o ; E ) (cid:3) .",
"During training, one frame is taken from every K frames to form the supervised learning dataset.",
"To preserve unit level actions for the executor training, all actions that happen in [ tK, ( t + 1) K ) frames are stacked onto the tK th frame. Reinforcement learning To train agents via reinforcement learning, we initialize them with the behavior cloning objective in Eq. (6) to provide an initial, human-meaningful grounding of message semantics (analogous to the initialization of the executor parameter (0) E in Section 3). We then fine-tune them on game success, providing a sparse reward of 1 when agents win the game, -1 when they lose or draw the game. As in Section 3, learned agents are trained on the game reward, using a proximal policy optimization (PPO) (Schulman et al., 2017) objective to optimize the expected reward: E ( s,a ) ( I,E ) R ( s, a ) . (7) Multi-task RL The original game of Hu et al. (2019) is defined by a set of attack multipliers : the aforementioned rock-paper-scissors dynamic arises because spearmen are especially effective against cavalry, archers against dragons, etc. To create alternative tasks in the MiniRTS environment, we create alternative versions of the game featuring different multipliers: e.g. making dragons invulnerable to archers or cavalry extra-effective against swordsmen. Table 1 shows these multipliers for the original rule, and a set of game variants with different multipliers are described in Appendix D. These variants are labeled BJ in the experiments that follow. Multiplier changes have significant effects on the optimal high-level strategy, affecting both which units are most effective overall, and how players should respond to opponents' choices. As in Section 3, we perform multitask LLP training in the MINIRTS environment by jointly optimizing expected reward across multiple game variants at once, assigning each variant its own set of instructor parameters I (initialized to the same value) but sharing a single set of executor parameters E across all contexts. The training pseudo-code can be found in Appendix C. 4.4 Experiments Unlike in the signaling game considered in Section 3, MINIRTS is complex, and we cannot take for granted that reinforcement learning of LLPs (with either ordinary or multitask objectives) will converge to an improved good solution at all. We thus begin with an analysis of policy performance and sample efficiency, then conclude this section with an analysis of semantic drift. (Model training details can be found in Appendix B.) 4.4.1 Evaluating performance and sample efficiency To evaluate policy quality and sample complexity, we compare the final win rate (against the fixed pool of rule-based agents) for the policy-cloned (BC), RL-tuned (RL joint ), and multitask-RL-tuned (RL multi ) agents described above. We perform this evaluation for multiple game configurations: original , with the same rules used by Hu et al. (2019) for human data collection and evaluation, and 3 alternative variants ( variant G , variant H , variant J ) , in which the relative strengths of various units has been modified (see Appendix D). We train 4 separate (RL joint ) agents corresponding to each of the environments and 2 (RL multi ) agents. Following D'Eramo et al. (2019), we provide both training strategies with a fixed budget of training experience across environments : both RL joint and RL multi have been trained on the same number of game episodes per training environment. 
"We also present the win rates for RL_joint and RL_multi when trained on 3x more episodes per environment.",
"Results are shown in Table 2.",
"Table 2: Evaluation of policy quality in MINIRTS (win rate under the standard training budget and with 3x training episodes).
Training strategy        Eval. environment   Standard   3x training
BC                       original            30.3       -
RL_joint [orig.]         original            65.7       86.9
RL_multi [orig., B, C]   original            76.5       90.6
BC                       variant G           11.6       -
RL_joint [G]             variant G           73.0       74.1
RL_multi [G, H, J]       variant G           75.7       77.6
BC                       variant H           26.2       -
RL_joint [H]             variant H           82.2       91.4
RL_multi [G, H, J]       variant H           79.4       83.5
BC                       variant J           14.6       -
RL_joint [J]             variant J           87.2       93.0
RL_multi [G, H, J]       variant J           91.2       93.7",
"Both RL fine-tuning strategies allow the policy to significantly improve over the behavior-cloned initializer, showing that effective reinforcement learning of LLPs is possible in MINIRTS.",
"In most environments, performance of the model fine-tuned with multitask training is higher than with ordinary joint RL training.",
"When RL_joint is provided the extra training budget, it sometimes surpasses the performance of the RL_multi model trained with the standard number of episodes; however, when RL_multi is also given the extra training budget, it performs better in all but one environment.",
"At a high level, these results indicate that multitask training of LLPs can be applied at small (and in some cases no) cost in accuracy and with significantly less per-environment training cost.",
"4.4.2 Evaluating semantic drift: Next, we consider the second question from the introduction: outside of performance effects, does multitask training with populations of instructors reduce semantic drift in executors?",
"We present two analyses.",
"Semantic drift In MINIRTS, executor semantic drift occurs when the executor performs actions that are not consistent with the instruction produced by the instructor.",
"( i.e., create spearman instruction produced by the instructor leads to the executor producing swordman instead).",
"In particular, this occurs in RL joint because the instructor and executor can co-adapt to new semantics during exploration as they are only trained to win the game.",
"Agent interoperability First, we evaluate the robustness of executor policies to alternative choices of instructors.",
"Specifically, we pair each RL-trained executor with an instructor trained either via behavior cloning (and thus guaranteed to implement the human annotators' semantics) or fine-tuning (RL instr ) on a different game variant from the executor (and thus not co-adapted with it).",
"Intuitively, to succeed at this task, executors must follow messages produced by the instructors trained in that domain.",
"Executors that have undergone less semantic drift should perform better when paired with these different instructors.",
"Results are shown in Table 3; here, it can be seen that multitask learning matches or exceeds the performance of single-task training on this evaluation of semantic drift in all rule variants studied, even when RL joint is provided additional training budget.",
"As evidence that performance comes from instructorexecutor pairs, Instructor Executor Eval.",
"rather than executors alone, using a random coach paired with RL multi [orig., B, C] on variant D gives 33.2% accuracy.",
"Additionally, when RL multi [orig., B, C] is paired with a coach from a different variant, we get an accuracy of just 41% on variant D. Low-level action semantics As an alternative means of gaining insight into learned behaviors, we can directly inspect the correspondence between instructor messages and executor actions.",
"We do this by uniformly sampling messages from a random instructor, then feeding them to the RL multi and RL joint executors and observing their choice of low-level actions a .",
"We then restrict these these ( m, a ) pairs to those in which (1) the text of m includes one of the words create, build, train or make and the name of a unit ( peasant , spearman , etc.) and (2) a is a TRAINUNIT action for any unit.",
"We then compute the empirical probability P ( unit 1 a | unit 2 m ) as shown in Fig. 4. If there is semantic drift, we expect to observe nonzero probability on the off-diagonal entries (the executor is building units different from those it is instructed to build).",
"RL multi places less probability mass on the off-diagonal entries compared to RL joint , consistent with less semantic drift.",
"In Fig. 4, one can also note that some word meanings change more than others.",
"We hypothesize that, this is because like in natural languages, environmental pressures cause the meanings of some words to change at a greater rate than others.",
"In this case, the dynamics of the game makes the spearman unit slightly stronger than the swordman unit overall.",
"This results in unexpectedly good performance for players who accidentally misinterpret swordman as spearman .",
"Therefore, this creates pressure for the conventional meaning of swordman to shift more than other units.",
"Taken together, these two evaluation results show that, when fine-tuning a policy initialized via imitation learning on the same objective, ordinary RL can be quite effective: the resulting executor model performs well even when paired with other instructors.",
"But as in Section 3, multitask training is even more helpful, especially by reducing semantic drift in both familiar and new environments.",
"We have presented a theoretical and empirical analysis of semantic drift and sample efficiency in multitask reinforcement learning of latent language policies (LLPs).",
"In a Lewis signaling game, we Figure 4: Messageaction drift in MINIRTS.",
"proved that multitask training can completely eliminate semantic drift.",
"In a two-player real-time strategy game, we showed that multitask training is effective at mitigating semantic drift, improves the quality of learned policies and is sample efficient.",
"Future work might integrate these results with other forms of population-based training (like those proposed by Gupta et al. (2019) for reference games) and explore other environmental factors affecting dynamics of language change in populations of learned agents.",
"We thank Hengyuan Hu for assistance in reproducing the original work and MIT Supercloud for compute resources.",
"We also thank Eric Chu, Alana Marzoev, Ekin Akyrek and Debby Clymer for feedback and proof-reading."
] | [
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"method",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other"
] |
[
"Thanks to the wealth of high-quality annotated images available in popular repositories such as ImageNet, multimodal language-vision research is in full bloom.",
"However, events, feelings and many other kinds of concepts which can be visually grounded are not well represented in current datasets.",
"Nevertheless, we would expect a wide-coverage language understanding system to be able to classify images depicting RECESS and REMORSE , not just CATS , DOGS and BRIDGES .",
"We fill this gap by presenting BabelPic, a hand-labeled dataset built by cleaning the image-synset association found within the BabelNet Lexical Knowledge Base (LKB).",
"BabelPic explicitly targets non-concrete concepts, thus providing refreshing new data for the community.",
"We also show that pre-trained language-vision systems can be used to further expand the resource by exploiting natural language knowledge available in the LKB.",
"BabelPic is available for download at http://babelpic.org .",
"There is growing research interest in developing effective systems capable of achieving some understanding of the content of an image.",
"As in most fields of applied AI, this requires annotated data to train a supervised system on.",
"While ImageNet 1 (Deng et al., 2009), one of the most influential projects in computer vision, was undeniably an important milestone towards image understanding, there is still a lot of ground to be covered.",
"ImageNet's initial aim was to collect pictures for most WordNet synsets (Miller, 1995).",
"Yet, at the time of writing, only some 21 , 841 nominal synsets are covered according to ImageNet's official website.",
"One issue with ImageNet and most other image repositories like COCO (Lin et al., 2014) and 1 http://www.image-net.org Flickr30kEntities (Plummer et al., 2015) is their focus on concepts denoting concrete, tangible things, such as CAT , TRAFFIC LIGHT and so on.",
"Concepts whose denotation is not clearly identifiable with a set of objects having distinct boundaries, such as events (e.g., FATALITY , COMPETITION ), emotions (e.g., SADNESS ) and psychological features (e.g., SHARPNESS ), have enjoyed less attention.",
"For lack of a better term, we will henceforth refer to them as non-concrete (NC) concepts.",
"On one hand, the inclusion of NC concepts would be an important step towards wide-coverage image semantic understanding.",
"On the other hand, it also goes in the same direction as recent multimodal language-vision approaches, e.g., mono-and cross-lingual Visual Sense Disambiguation (Barnard and Johnson, 2005; Loeff et al., 2006; Saenko and Darrell, 2008; Gella et al., 2016, 2019).",
"Taking into account NC concepts could also be of crucial importance for fascinating language-focused applications, such as Multimodal Machine Translation.",
"Last but not least, NC concepts would represent a significative benchmark for real-world multimodal applications.",
"In fact, traditional computer vision approaches rely on the detection of objects within the image, but many NC concepts are not well described by a bag of objects.",
"Consider, for instance, Figure",
"1. The two images illustrate different NC concepts (i.e., HIGH JUMP and POLE VAULT ) which are different configurations of the same elementary objects (i.e., PERSON , ROD , BLEACHERS ).",
"Thus, NC concepts require complex image understanding, integrating a fair amount of common sense knowledge.",
"As a contribution towards this goal of expanding the scope of research, we introduce BabelPic, the first dataset for multimodal language-vision tasks with a focus on NC concepts and that is also linked to WordNet.",
"BabelPic has been built by manually validating synset-image associations available in Figure 1: Two images described by the same bag of visual words but illustrating different NC concepts (i.e., high jump and pole vault).",
"BabelNet (Navigli and Ponzetto, 2012), a large multilingual resource linking WordNet to Wikipedia and other resources.",
"Furthermore, we provide a methodology to extend the BabelPic coverage to all the BabelNet synsets.",
"To this end, we adapt the recently introduced Vision-Language Pre-training (VLP) model (Zhou et al., 2020).",
"We define the verification of synset-image associations as a Visual Question Answering (VQA) task with two possible answers.",
"The evaluation demonstrates that our methodology achieves high performances on zero-shot classification as well, thus enabling verification across the inventory.",
"Thanks to the automatic production of a silver dataset, BabelPic constitutes a significant extension of ImageNet.",
"A few examples from BabelPic (both gold and silver) are shown in Figure",
"2. 2 Related Work To the best of our knowledge, no dataset of annotated images exists which has a focus on NC nominal and verbal concepts and is also linked to Lexical Knowledge Bases (LKB) such as WordNet and BabelNet.",
"For example, the very popular ImageNet dataset, which includes images belonging to around 21 , 800 categories organized according to the WordNet nominal hierarchy, offers only sparse coverage of NC concepts.",
"JFT (Hinton et al., 2015; Chol-let, 2017; Sun et al., 2017) is an internal dataset at Google containing 300M images annotated with over 19 , 000 classes including objects, scenes (e.g., SUNSET ), events (e.g., BIRTHDAY ) and attributes (e.g., RED ).",
"JFT differs from our work in not being linked to an LKB and in not being publicly released.",
"The Open Images dataset (Kuznetsova et al., 2018) contains 9M images annotated with 19 , 794 classes taken from JFT.",
"While Open Images does contain NC labels, the classes are not linked to an LKB, thus limiting their usefulness.",
"The Tencent ML-Images dataset (Wu et al., 2019) was created starting from a subset of ImageNet and Open Images and includes images annotated with 11 , 166 categories, which are then linked to WordNet synsets.",
"The dataset differs from our work since any NC label has been explicitly discarded.",
"Our work is in some sense similar to MultiSense (Gella et al., 2019) and VerSe (Gella et al., 2016), two datasets including images annotated with verbal senses.",
"However, MultiSense is not directly linked to an LKB and neither of these two datasets deals with nominal synsets.",
"Finally, we note that datasets including images annotated with object-level categories (Lin et al., 2014; Plummer et al., 2015) or videos (Loui et al., 2007; Dollar et al., 2009; Moneglia et al., 2014; Heilbron et al., 2015; Abu-El-Haija et al., 2016) are outside the scope of this work, since we are only interested in the main NC concepts depicted within images.",
"BabelPic is built by exploiting the link between WordNet (Miller, 1995) and Wikipedia within BabelNet 2 (Navigli and Ponzetto, 2012).",
"Our approach is organised in a three-step process.",
"First, we select a set of NC synsets from WordNet, on the basis of both their paradigmatic nature and relations in the knowledge base.",
"Second, we gather all the corresponding images in BabelNet, which are themselves mostly taken from Wikipedia pages.",
"Third, we manually validate the synset-images mapping.",
"Note that, having defined the task as a validation of concept-image associations, we do allow images to be mapped to more than one concept and vice versa.",
"For instance, both images in Figure 1 could be mapped to the concept COMPETITION as well.",
"The result is a gold dataset containing 2,733 synsets and 14 , 931 images.",
"We decided to build our gold dataset starting from concepts related to events and emotions because these have been shown to be the most appealing NC concepts for the multimodal and vision communities (see Section 2).",
"As a first step towards this goal, we select the nominal synsets belonging to the transitive closure of the hyponymy relation, rooted in the following set of WordNet synsets: { feeling.n.01 , event.n.01 } .",
"To ensure that only NC concepts are selected, we filter out any synset connected by the hypernymy relation to at least one of the following synsets: physical entity.n.01 , 2 https://babelnet.org Figure 2: A few examples from BabelPic, both gold (G) and silver (S).",
"shape.n.02 , color.n.01 .",
"This is done in order to discard concepts denoting tangible things that inherit from abstraction.n.06 in WordNet (e.g., THUNDERBOLT ).",
"Furthermore, we select all the synsets belonging to the following WordNet lexicographer files: verb.competition , verb.motion and verb.social .",
"This is done to create a dataset with an explicit focus on events, properties and verbs.",
"As a second step, we discard all the concepts belonging to either the mathematics or the physics domains since images are often not relevant (e.g., ROUNDING ).",
"Finally, we associate each selected synset with the first 15 corresponding images in BabelNet 4.0.",
"Note that, in order to improve the quality of the dataset, we filter out images on the basis of simple heuristics.",
"For example, we filter out all images where transparency is used and at least half of the pixels are white-coloured, as these are not likely be relevant.",
"Most of the noise images from Wikipedia are removed as a result of this step.",
"The synset-image associations found are manually validated during phase",
"3. We have decided to use the services of two expert annotators who are familiar with the BabelNet resource, and the whole annotation process is performed through an ad hoc graphical interface.",
"Annotators are shown tuples in the form (cid:104) s, l, g, i (cid:105) , where s is the target synset, i is a candidate image for s , and l and g are, respectively, the main lemma and gloss (i.e., definition) for s .",
"Annotators are asked to answer the question is i pertinent to g ?.",
"Possible answers are yes (i.e., i is an illustration of g ), no (i.e., i is either not pertinent or in contradiction with g ) and discard (i.e., i is a bad image).",
"To maximize coverage, each annotator is assigned roughly half of the concept-image association candidates.",
"However, in order to establish and agree on possible useful guidelines for the evaluation, annotators are asked to collaboratively perform the validation of a first sample of 500 instances.",
"We also provide them with a few extra directions.",
"For instance, we ask them to discard images in which the association cannot be verified without reading text depicted in the image.",
"In addition to this collaboratively annotated sample, we select an intersection of 100 annotation instances which we then use to obtain an inter-annotator agreement figure.",
"The level of agreement achieved is 80.39%, with a value of 0.6078 (moderate agreement).",
"As for these shared examples, we include in our gold dataset only those instances that have been approved by both annotators.",
"Our gold dataset is hence composed of all the validated synset-image associations.",
"Since manual validation is time consuming, we are interested in developing a methodology for the automatic verification of synset-image associations.",
"In the recent past there has been a great research effort to develop models for vision-language pretraining.",
"Many such models (e.g., VLP (Zhou et al., 2020), VisualBERT (Li et al., 2019), ViL-BERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019)) are built upon BERT (Devlin et al., 2019), a popular system for contextualized embeddings.",
"BERT-based models achieve state-of-the-art scores on many language-vision tasks, hence they represent a promising resource for our task.",
"The system that we use to perform classification is the fine-tuned VLP model.",
"Despite the fact that LXMERT (Tan and Bansal, 2019) achieves a slightly higher score on yes/no questions on the VQA 2.0 dataset (Goyal et al., 2017), our preference goes for the VLP system since it is pre-trained on a wider and more general dataset.",
"More specifi-cally, the VLP model is pre-trained on Conceptual Captions (CC) (Sharma et al., 2018), a dataset including more than 3M image-caption pairs, using two unsupervised vision-language tasks: bidirectional and sequence-to-sequence masked language prediction.",
"The input images are preprocessed using Faster R-CNN (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2017; Anderson et al., 2018), hence obtaining 100 object regions per image.",
"The model input consists of both class-aware region embeddings and word embeddings, the former obtained by combining the corresponding region features with the probability of each object label and region geometric information.",
"Furthermore, a Multi-Layer Perceptron (MLP) is trained during the fine-tuning phase in order to select the chosen answer starting from the hidden state of the encoder.",
"In order to adapt the VLP model to extend the BabelPic coverage to all the BabelNet synsets, we define the verification of synset-image associations as a VQA task with two possible answers.",
"More specifically, we define a question template as in the following: Does the image depict l ( g )? where l is the main lemma and g is the WordNet gloss of the target synset.",
"We instantiate our template for each synset-image pair in the dataset, thus obtaining a textual question for each instance.",
"We set the ground truth answers to either yes or no, hence reducing our classification task to VQA.",
"To test the reliability of our approach for the automatic verification of concept-image associations we experiment in a zero-shot setting (see Section 5.3).",
"As a first step toward this goal, we need to augment our dataset with negative instances (see Section 5.1) and select the most suitable VLP version (see Section 5.2).",
"A deeper analysis of how the sampling of negative instances affects the performances of the system is described in Section 5.4.",
"In order to evaluate our methodology for the automatic verification of synset-image associations, we need to define a procedure for the generation of negative instances (i.e., irrelevant (cid:104) synset , image (cid:105) pairs).",
"More specifically, we define a negative instance (cid:104) s, i (cid:105) by picking two different synsets s and s (cid:48) and an image i associated with s (cid:48) from our gold dataset.",
"Negative instances can be distinguished on the basis of the relation connecting s to s (cid:48) : Sibling: there exists a synset s (cid:48)(cid:48) in BabelNet s.t. both s and s (cid:48) are connected to s (cid:48)(cid:48) by the hypernymy relation (e.g., FUN RUN and MARATHON ).",
"Exploiting the WordNet relations as mentioned above is also very effective in handling any potential issue due to images that are instances of multiple concepts.",
"For instance, the images in Figure 1 could never be used as negative examples for COMPETITION because of the hyponymy relation connecting this concept to HIGH JUMP and POLE VAULT .",
"Moreover, we manually validated a sample of the negative examples in order to ensure the reliability of our methodology.",
"The result is a dataset which is perfectly balanced between the two output classes.",
"We split the dataset into training, validation and test sets following the 80%/10%/10% rule.",
"Each class is proportionally distributed between the splits, as well as the relations used to define the negative instances.",
"In order to test the system's capability to handle previously unseen concepts, we force both the validation and test sets to contain also instances referring to synsets that are not present in the training set.",
"We refer to the subset of the test set given by these instances as the zero-shot test.",
"Statistics are reported in Table",
"1. 5.2 Pre-Trained vs. Fine-Tuned In this work we refer to the VLP 3 model (Zhou et al., 2020) pre-trained on CC and fine-tuned for the VQA task on the VQA 2.0 dataset as, respectively, P-VLP and F-VLP.",
"Note that both P-VLP 3 https://github.com/LuoweiZhou/VLP Split N C I S(%) P(%) Training 23,891 2,618 13,311 10.20 1.95 Validation 2,986 1,442 2,740 10.18 1.98 Test 2,987 1,416 2,715 10.21 1.94 Zero-Shot 502 43 490 11.55 2.19 Table 1: Overview of the BabelPic's splits: number of instances (N), concepts (C), images (I) and distribution of instances labelled as sibling (S) and polysemy (P).",
"and F-VLP are then further fine-tuned for the verification of concept-image associations on BabelPic's training split.",
"Our experiments show that both systems are reliable on our task, achieving precision and F1 scores that are over 70% on all the splits (see Table 2).",
"However, the F-VLP model proves to be the most stable for the task.",
"In fact, in a common use case scenario it is more important to accept only correct synset-image associations than it is to detect all the correct pairs.",
"More specifically, we value precision over recall, and thus prefer the fine-tuned VLP model.",
"Our main interest is in developing a model capable of annotating images with synsets even when the target concept is new to the system (i.e., zero-shot).",
"As shown in the last column of Table 2, both the P-VLP and F-VLP models are robust to zero-shot classification, achieving scores that are comparable to the performances registered on the other splits.",
"The F-VLP system, in particular, is able to verify the associations between unseen synsets and images with precision 77.67%, hence enabling the automatic extension of BabelPic to any other synset.",
"Finally, we analyse the system performances on the different types of negative instances.",
"The accuracy scores achieved by F-VLP are listed in Table",
"3. As one would expect, when the input synset-image pair is unrelated , the system is able to correctly Relation Validation Test Zero-Shot Unrelated 83.98 83.63 89.01 Sibling 51.64 53.11 62.07 Polysemy 30.51 44.83 45.45 Table 3: Accuracy scores (as percentages) achieved by F-VLP on all the different types of negative instances.",
"classify most of the instances.",
"When considering the instances labelled as sibling , the difficulty level increases and F-VLP achieves an accuracy score of 62.07%.",
"This is not surprising when it is considered that discriminating between images representing sibling concepts (e.g., DISAPPOINTMENT and BOREDOM ) can be tricky for humans as well.",
"Finally, the instances labelled as polysemy prove to be the hardest ones, demonstrating that BabelPic can be an interesting benchmark for Visual Sense Disambiguation as well.",
"The performances achieved by P-VLP follow the same trend.",
"In this work we introduced BabelPic, a new resource for language-vision tasks, built by validating the existing image-to-synset associations in the BabelNet resource.",
"BabelPic is innovative in being the first dataset with a focus on nominal and verbal non-concrete concepts linked to the WordNet and BabelNet Lexical Knowledge Bases.",
"Furthermore, we presented a methodology to extend the resource by fine-tuning VLP, a state-of-the-art pre-trained language-vision architecture.",
"In our approach, we automatically verify the synset-image associations by exploiting the natural language definitions in WordNet, showing strong results on zero-shot classification as well.",
"We exploited our method for the automatic generation of a wide-coverage silver dataset containing around 10 , 013 synsets.",
"We make BabelPic (both gold and silver data) available to the community for download at http://babelpic.org .",
"The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research and innovation programme."
] | [
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"other",
"other"
] |
[
"Chinese is a logographic writing system, and the shape of Chinese characters contain rich syntactic and semantic information.",
"In this paper, we propose a model to learn Chinese word embeddings via three-level composition: (1) a convolutional neural network to extract the intra-character compositionality from the visual shape of a character; (2) a recurrent neural network with self-attention to compose character representation into word embeddings; (3) the Skip-Gram framework to capture non-compositionality directly from the contextual information.",
"Evaluations demonstrate the superior performance of our model on four tasks: word similarity, sentiment analysis, named entity recognition and part-of-speech tagging.",
"1 1 Introduction Distributed representations of words, namely word embeddings, encode both semantic and syntactic information into a dense vector.",
"Currently, word embeddings have been playing a pivotal role in many natural language processing (NLP) tasks.",
"Most of these NLP tasks also benefit from the pre-trained word embeddings, such as word2vec (Mikolov et al., 2013a) and GloVe (Pen-nington et al., 2014), which are based on the distributional hypothesis (Harris, 1954): words that occur in the same contexts tend to have similar meanings.",
"Earlier word embeddings often take a word as a basic unit, and they ignore compositionality of its sub-word information such as morphemes and character n-grams, and cannot competently handle the rare words.",
"To improve the performance of word embeddings, sub-word information has been employed (Luong et al., 2013; Qiu Corresponding author. 1 The source codes are available at https://github. com/HSLCY/VCWE et al., 2014; Cao and Rei, 2016; Sun et al., 2016a; Wieting et al., 2016; Bojanowski et al., 2016).",
"Compositionality is more critical for Chinese, since Chinese is a logographic writing system.",
"In Chinese, each word typically consists of fewer characters and each character also contains richer semantic information.",
"For example, Chinese character (rest) is composed of the characters for (person) and (tree), with the intended idea of someone leaning against a tree, i.e., resting.",
"Based on the linguistic features of Chinese, re-cent methods have used the character information to improve Chinese word embeddings.",
"These methods can be categorized into two kinds: 1) One kind of methods learn word embeddings with its constituent character (Chen et al., 2015), radical 2 (Shi et al., 2015; Yin et al., 2016; Yu et al., 2017) or strokes 3 (Cao et al., 2018).",
"However, these methods usually use simple operations, such as averaging and n-gram, to model the inherent compositionality within a word, which is not enough to handle the complicated linguistic compositionality.",
"2) The other kind of methods learns word embeddings with the visual information of the character.",
"Liu et al. (2017) learn character embedding based on its visual characteristics in the text classification task.",
"Su and Lee (2017) also introduce a pixel-based model that learns character features from font images.",
"However, their model is not shown to be better than word2vec model because it has little flexibility and fixed character features.",
"Besides, most of these methods pay less attention to the non-compositionality.",
"For example, the 2 the graphical component of Chinese, referring to https://en.wikipedia.org/wiki/Radical_(Chinese_characters) 3 the basic pattern of Chinese characters, referring to https://en.wikipedia.org/wiki/Stroke_(CJKV_character) semantic of Chinese word (sofa) cannot be composed by its contained characters (sand) and (hair).",
"In this paper, we fully consider the compositionality and non-compositionality of Chinese words and propose a visual character-enhanced word embedding model (VCWE) to learn Chinese word embeddings.",
"VCWE learns Chinese word embeddings via three-level composition: The first level is to learn the intra-character composition, which gains the representation of each character from its visual appearance via a convolutional neural network; The second level is to learn the inter-character composition, where a bidirectional long short-term neural network (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) with self-attention to compose character representation into word embeddings; The third level is to learn the non-compositionality, we can learn the contextual information because the overall framework of our model is based on the skip-gram.",
"Evaluations demonstrate the superior performance of our model on four tasks such as word similarity, sentiment analysis, named entity recognition and part-of-speech tagging.",
"In the past decade, there has been much research on word embeddings.",
"Bengio et al. (2003) use a feedforward neural network language model to predict the next word given its history.",
"Later methods (Mikolov et al., 2010) replace feedforward neural network with the recurrent neural network for further exploration.",
"The most popular word embedding system is word2vec , which uses continuous-bag-of-words and Skip-gram models, in conjunction with negative sampling for efficient conditional probability estimation (Mikolov et al., 2013a).",
"A different way to learn word embeddings is through factorization of word co-occurrence matrices such as GloVe embeddings (Pennington et al., 2014), which have been shown to be intrinsically linked to Skip-gram and negative sampling (Levy and Goldberg, 2014).",
"The models mentioned above are popular and useful, but they regard individual words as atomic tokens, and the potentially useful internal structured information of words is ignored.",
"To improve the performance of word embedding, sub-word information has been employed (Luong et al., 2013; Qiu et al., 2014; Cao and Rei, 2016; Sun et al., 2016a; Wieting et al., 2016; Bojanowski et al., 2016).",
"These methods focus on alphabetic writing systems, but they are not directly applicable to logographic writing systems.",
"For the alphabetic writing systems, research on Chinese word embedding has gradually emerged.",
"These methods focus on the discovery of making full use of sub-word information.",
"Chen et al. (2015) design a CWE model for jointly learning Chinese characters and word embeddings.",
"Based on the CWE model, Yin et al. (2016) present a multi-granularity embedding (MGE) model, additionally using the embeddings associated with radicals detected in the target word.",
"Xu et al. (2016) propose a similarity-based character-enhanced word embedding (SCWE) model, exploiting the similarity between a word and its component characters with the semantic knowledge obtained from other languages.",
"Shi et al. (2015) utilize radical information to improve Chinese word embeddings.",
"Yu et al. (2017) introduce a joint learning word embedding (JWE) model and Cao et al. (2018) represent Chinese words as sequences of strokes and learn word embeddings with stroke n-grams information.",
"From another perspective, Liu et al. (2017) provide a new way to automatically extract character-level features, creating an image for the character and running it through a convolutional neural network to produce a visual character embedding.",
"Su and Lee (2017) also introduce a pixel-based model that learns character features from its image.",
"Chinese word embeddings have recently begun to be explored, and have so far shown great promise.",
"In this paper, we propose a visual character-enhanced word embedding (VCWE) model that can learn Chinese word embeddings from corpus and images of characters.",
"The model combines the semantic information of the context with the image features of the character, with superior performance in several benchmarks.",
"In this section, we introduce the visual character-enhanced word embedding (VCWE) model for Chinese word representation.",
"Given a Chinese word w consisting of n characters c 1 , , c n , its semantic may come from either its contained characters or its contexts.",
"Therefore, we use the two-level hierarchical composition to compose the word embedding, which further learned according to its context.",
"The overall architecture of our approach is on Figure 1. We first use a convolutional neural network (CNN) to model the intra-character compositionality of character from its visual shape information.",
"We use the output of CNN as the embeddings of the character.",
"Then the character embeddings are used as the input of the bidirectional LSTM network to model the inter-character compositionality.",
"After a self-attention layer, we can get the representation of the word.",
"Finally, based on the Skip-Gram framework, we learn the word embeddings with the visual character-enhanced embedding of the context.",
"Since the shape of a Chinese character provides rich syntactic and semantic information, the representation of a character can be composed by its intrinsic visual components.",
"Following the success of the convolutional neural network (CNN) (LeCun et al., 1995) in computer vision, we use CNN to directly model the natural composition of a character from its image.",
"We first convert each character into an image of size 40 40 , a deep CNN is used to fuse its visual information fully.",
"The specific structure of the CNN is shown in Figure",
"1(a), which consists of two convolution layers and one linear layer.",
"Each convolution layer is followed by a max pooling layer and a batch normalization layer.",
"The lower layers aim to capture the stroke-level information, and the higher layers aim to capture the radical-level and component-level information.",
"The output of CNN can be regarded as the representation of the character.",
"The character representation by its visual information can fully capture its intrinsic syntactic and semantic information with the intra-character compositionality.",
"The parameters of CNN are learned through backpropagation in end-to-end fashion.",
"After obtaining the representation of characters, we combine them into word embedding.",
"The word embedding need to capture the character-level compositionality fully.",
"Here, we use the bidirectional LSTM (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) with self-attention to fuse the inter-character information of a word.",
"self-attention is shown in Figure",
"1(b).",
"Given a word w consisting of n characters c 1 , , c n , we use e 1 , , e n denote is the character representations, which are the output of the CNN rather than randomly initialized.",
"h Fi = LSTM( h Fi 1 , e i ) , (1) h Bi = LSTM( h Bi +1 , e i ) , (2) h i = [ h Fi ; h Bi ] , (3) H = [ h 1 , h 2 , ..., h n ] , (4)",
"where h i is the hidden state of the i -th character in w .",
"Then we use self-attention to obtain the inter-character compositionality.",
"Following the self-attention proposed by (Lin et al., 2017), we compute an attention vector : = softmax( v tanh( U h T i )) , (5) where v and U are learnable weight parameters.",
"Since the Bi-LSTM's hidden state of each character is different according to its contexts, we believe the hidden state can capture both the compositional and non-compositional relations of the characters within a word.",
"After obtaining the word representation, Skip-Gram (Mikolov et al., 2013a) is used to learn the word embedding with its context information.",
"Skip-Gram is a useful framework for learning word vectors, which aims to predict context words given a target word in a sentence.",
"Given a pair of words ( w, c ) , we denote p ( c | w ) as the probability that the word c is observed in the context of the target word w .",
"With the negative-sampling approach, skip-gram formulates the probability p ( c | w ) as follows: Given a pair of words ( w, c ) , the probability that the word c is observed in the context of the target word w is given by p ( D = 1 | w, c ) = ( w T c ) , (7) where w and c are embedding vectors of w and c respectively, is the sigmoid function.",
"The probability of not observing word c in the context of w is given by: p ( D = 0 | w, c ) = 1 ( w T c ) .",
"Given the target word w , its context word c and k negative words c 1 , ..., c k .",
"The word w is a word selected from a sentence in the corpus, and the context c is a nearby word within a window size l .",
"The negative sample c i is a word that is randomly sampled at a certain frequency in the vocabulary.",
"The loss function of VCWE model is as follows: L = L 1 + L 2 , (9) L 1 = log ( w T c )+ k (cid:88) i =1 log ( w T c i ) , (10) L 2 = log ( w T m c )+ k (cid:88) i =1 log ( w T m i ) , (11) where w is the lookup embedding of target word; c and c i are the lookup embeddings of the context and negative words respectively; m c and m i are visual enhanced word embeddings of the context and negative words respectively.",
"Here, we use the visually enhanced word embedding as the representation of context word instead of the target word.",
"The final embedding of the target word is indirectly affected by the visual information.",
"Thus, the final word embedding can have an advantage of fully utilizing intra-character compositionality from CNN, inter-character compositionality from LSTM, and context information from Skip-gram.",
"We use a word sampling scheme similar to the implementation in word2vec (Mikolov et al., 2013a,b) to balance the importance of frequent words and rare words.",
"Frequent words such as (of), (is), (this) are not as meaningful as relatively less frequent words such as (cat), (like), (fruit).",
"To improve the performance of word embeddings, we use sub-sampling(Mikolov et al., 2013b) to discard the word w with the probability of P ( w ) = 1 (cid:113) t f ( w ) when generating the batch, where f ( w ) is the frequency of word w and t is a chosen threshold, typically around 10 5 .",
"To generate negative context words, we sample each word w according to distribution P ( w ) U ( w ) 34 , where U ( w ) is the unigram distribution, which is the frequency of single words appearing in the corpus.",
"This method also plays a role in reducing the frequency of occurrence of high-frequency words.",
"We download Chinese Wikipedia dump 4 on May 20, 2018, which consists of 278K Chinese Wikipedia articles.",
"We use the WikiExtractor toolkit 5 to convert data from XML into text format.",
"We find that the corpus consists of both simplified and traditional Chinese characters.",
"Hence we utilize the opencc toolkit 6 to normalize all characters as simplified Chinese.",
"We remove non-Chinese characters such as punctuation marks by retaining the characters whose Unicode falls into the range between 0x4E00 and 0x9FA5.",
"We use THULAC 7 (Sun et al., 2016b) for word segmentation.",
"We discard words that appeared less than 100 times and obtain a vocabulary of size 66,856.",
"We count the frequency of occurrence of each word to prepare for the subsampling work.",
"In all 66,856 words, we extract 5030 unique characters.",
"We use a Chinese character image generation software to generate the images of these Chinese characters.",
"We subtract a mean image from each input image to center it before feeding it into the CNN.",
"The pre-processed Chinese character images are shown in Figure 2. Figure 2: The pre-processed Chinese character images.",
"4 https://dumps.wikimedia.org/zhwiki/20180520/ 5 https://github.com/attardi/wikiextractor/blob/master/Wiki Extractor.py 6 https://github.com/BYVoid/OpenCC 7 https://github.com/thunlp/THULAC-Python 5.2 Hyperparameters Models used for evaluation have dimension D = 100 and use context window l = 5 unless stated otherwise.",
"We use the threshold t = 10 5 for subsampling, which is the recommended value for word2vec Skip-gram (Mikolov et al., 2013a) on large datasets.",
"The number of negative samples per word is 5.",
"We use mini-batch asynchronous gradient descent with Adam (Kingma and Ba, 2014).",
"The initial learning rate is 0.001.",
"word2vec 8 (Mikolov et al., 2013a) is arguably the most popular word embedding, which uses continuous-bag-of-words (CBOW) and Skip-gram models.",
"We train word2vec with both Skip-gram and CBOW models.",
"We did not train Glove(Pennington et al., 2014) because it did not perform well in many previous Chinese word embedding papers.",
"CWE 9 (Chen et al., 2015) is character-enhanced word embeddings which introduce internal character information into word embedding methods to alleviate excessive re-liance on the external information.",
"GWE 10 (Su and Lee, 2017) is a pixel-based Chinese word embedding model, which exploits character features from font images by convolutional autoencoders.",
"JWE 11 (Yu et al., 2017) is a model to jointly learn the embeddings of Chinese words, characters, and sub character components.",
"For a fair comparison between different algorithms, we use the same corpus and the same hyperparameters mentioned in previous subsections.",
"We evaluate our embeddings on the Chinese word similarity datasets wordsim-240 and wordsim-296 provided by (Chen et al., 2015).",
"Besides, we translate two English word similarity datasets MC-30 8 https://code.google.com/archive/p/word2vec/ 9 https://github.com/Leonard-Xu/CWE 10 https://github.com/ray1007/gwe 11 https://github.com/hkust-knowcomp/jwe Model WS-240 WS-296 MC-30 RG-65 avg Skip-gram 50.23 56.94 69.66 59.86 59.17 CBOW 51.49 61.01 68.97 63.85 61.33 +2.16 CWE 52.63 58.98 68.82 59.60 60.01 +0.84 GWE 52.74 58.22 68.23 60.74 59.98 +0.81 JWE 51.92 59.84 70.27 62.83 61.22 +2.05 VCWE 57.81 61.29 72.77 70.62 65.62 +6.45 -CNN 55.82 59.60 66.87 68.53 62.71 +3.54 -LSTM 58.13 60.85 68.03 69.78 64.20 +5.03 Table 1: Spearman correlation for word similarity datasets, -CNN represents replacing the CNN and image information with randomly initialized character embedding, -LSTM represents replacing Bi-LSTM network and self-attention with the averaging operation.",
"(Miller and Charles, 1991) and RG-65 (Ruben-stein and Goodenough, 1965) to Chinese 12 .",
"Each dataset contains a list of word pairs with a human score of how related or similar the two words are.",
"We calculate the Spearman correlation (Spear-man, 1904) between the labels and our scores generated by the embeddings.",
"The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.",
"The evaluation results of our model and baseline methods on word similarity datasets are shown in Table 1. From the results, we can see that VCWE outperforms other baseline models.",
"The effect of CBOW is much better than Skip-gram.",
"The impact of GWE and CWE are relatively close.",
"The JWE model works better than other benchmark models.",
"In the VCWE model, when we remove the CNN and the image information, the result falls by 2.91.",
"When we replace Bi-LSTM network and self-attention with the averaging operation, the re-12 https://github.com/FudanNLP/VCWE sult drops by 1.42.",
"In the last subsection, we will qualitatively analyze the results of word similarity for different models.",
"To evaluate the quality of our vectors regarding semantics, we use datasets 13 collected by (Peng et al., 2018), which contain Chinese reviews in four domains: notebook, car, camera, and phone.",
"They manually labeled the sentiment polarity towards each aspect target as either positive or negative.",
"It is a binary classification task.",
"Similar to how we process the training data, we remove non-Chinese characters and use THULAC for performing Chinese word segmentation.",
"We build classifiers with the bidirectional LSTM (Hochre-iter and Schmidhuber, 1997) network with self-attention (Lin et al., 2017).",
"We use the standard training/dev/test split and report accuracy using different embeddings generated by different meth-13 http://sentic.net/chinese-review-datasets.zip ods in Table 2. As shown in Table 2, Skip-gram performs well on the combination of the four groups, but it does not perform well in the works of a particular group.",
"JWE outstrips other baseline methods by around 1.1 points.",
"The VCWE model has achieved outstanding results in the car, camera and phone category, with an accuracy rate of at least 3 points higher than other models, indicating that this method of training word embeddings with visual character-level features can achieve better results on downstream tasks.",
"We evaluate our model on the named entity recognition task.",
"We use an open source Chinese NER model to test our word embeddings on MSRA dataset 14 .",
"MSRA is a dataset for simplified Chinese NER.",
"It comes from SIGHAN 2006 shared task for Chinese NER (Levow, 2006).",
"We pretrain word embeddings from different models and feed them into the input layer as features.",
"The key to the task is to extract named entities and their associated types.",
"Better word embeddings could get a higher F1 score of NER.",
"The results in Table 3 show that our model also outperforms baseline models in this task.",
"The performance of CWE and GWE models are similar, both slightly lower than Skip-gram and CBOW models.",
"The F1 score of the JWE model exceeds that of other baseline models and is similar to our model.",
"When removing the CNN and image information, our LSTM with the self-attention model can also achieve the best results on this task, indicating that the learned inter-character composition is practical.",
"The evaluation is performed on the PKU's Peo-ple's Daily 15 (PPD) (Yu et al., 2001) with the standard training/dev/test split.",
"The model is trained with the bidirectional LSTM model using the same hyper-parameters.",
"Results on the POS accuracy on the test set are reported in Table 3. The gap between the usage of different embeddings is not significant, and our model has achieved the best results with a slight advantage.",
"Explicitly, we present the top 10 words that are most similar to our target word.",
"The similar words are retrieved based on the cosine similarity calculated using the learned embeddings.",
"The first example word we consider is (Tang poetry).",
"It refers to poetry written in or around the time of or in the characteristic style of China's Tang Dynasty.",
"All the top-ranked words identified by GWE contain the character (Tang) and (poetry), but in addition to the Tang Dynasty, (Tang) also has other meanings such as surnames.",
"GWE yields several words such as (Don Juan), (Tang Yin), (Monk Tang) and (Tang Ku), which do not appear to be semantically close to the target word.",
"In Skip-gram and JWE, certain words such as (anonymity) and (ancient and modern) do not appear to be semantically very closely related to the target word.",
"In our VCWE model, all the top-ranked words are semantically related to the target word, including the genre of poetry, poets of the Tang Dynasty, and so on.",
"We choose the (sofa) as the second target word.",
"Like the first two words, GWE only pays attention to the character (sand).",
"Skip-gram and JWE have some irrelevant words such as (telephone box) and (billboard).",
"VCWE pays more attention to Targets Skip-gram GWE JWE VCWE (Tang poetry) (Qu-Poetry) (Song poetry) (notes on poetry) (notes on poetry) (music score) (indite) (ancient and modern) (Song poetry) (anonymity) (rhyme) (anonymity) (jueju) (Song Ci Poetry) (Chinese poetry) (Yuefu) (Song Ci Poetry) (Bai Juyi) (Don Juan) (music score) (chant) (jueju) (recite poems) (compile) (Yuefu) (record) (Tang Yin) (carving copy) (seven-character) (Chu Songs) (Monk Tang) (be handed down) (Li Shangyin) (Yuefu) (Tang Ku) (ancient poetry) (ancient poetry) (compile) (poetry) (Qu-Poetry) (poetic prose) (sofa) (bureau) (hourglass) (desk) (wardrobe) (bedroom) (sand) (wardrobe) (bedroom) (chair) (sandbag) (washcloth) (bathtub) (upstairs) (sandbox) (secretaire) (living room) (living room) (raucity) (quilt) (curtain) (bathtub) (sandspit) (bench) (chair) (downstairs) (satay) (curtain) (fireplace) (raincoat) (sandbag) (bathtub) (door) (bloodstain) (Saori) (door) (bench) (telephone box) (nereid) (billboard) (desk) Table 4: Case study for qualitative analysis.",
"Limited to the width of the table, we do not show the results of CWE model.",
"The results of the GWE model are not much different from the CWE model, indicating that the image features obtained by pre-training of GWE may not play a decisive role.",
"However, our model does not pre-train image information, but jointly trains and dynamically updates image feature information and it works better.",
"JWE model is similar to Skip-gram model in that they pay more attention to contextual information, but sometimes the model gets some irrelevant words.",
"Unlike phonograms, logograms have word and phrase meanings singularly.",
"The images of Chinese characters contain rich semantic information.",
"Since logographic languages are more closely associated with images than alphabet languages, it makes sense to mine the characteristics of these images.",
"Liu et al. (2017) provide a new way to automatically extract character-level features, creating an image for the character and running it through a convolutional neural network to produce a visual character embedding.",
"However, this method does not utilize the rich semantic information of contextual words.",
"Our model extracts both image features and contextual semantic information.",
"Su and Lee (2017) introduce a pixel-based model that learns character features from font images.",
"However, they use convolutional auto-encoder(convAE) to extract image features in advance, and then add these features to the CWE (Chen et al., 2015) model.",
"In the end, the effect of the model is not much different from CWE.",
"Our model is an end-to-end model.",
"We update the im-age's feature parameters in real time during training, and our model achieves better results than the GWE model.",
"Our research focuses on simplified Chinese word embeddings, and the idea can also be applied to other languages that share a similar writing system, such as traditional Chinese, Japanese, and so on.",
"In this paper, we proposed a pixel-based model to learn Chinese word embeddings with character embeddings that are compositional in the components of the characters.",
"We utilized the visual features of Chinese characters to enhance the word embedding.",
"We showed that our model outperforms the baseline model in the word similarity, sentiment analysis, named entity recognition and part-of-speech tagging tasks.",
"end-to-end and make full use of the contextual information.",
"In the future, we hope to apply our model to other downstream tasks and other logographic writing systems.",
"We would like to thank the anonymous reviewers for their valuable comments.",
"The research work is supported by Shanghai Municipal Science and Technology Commission (No. 17JC1404100 and 16JC1420401), National Key Research and Development Program of China (No. 2017YFB1002104), and National Natural Science Foundation of China (No. 61672162 and 61751201)."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"method",
"result",
"abstain",
"method",
"other",
"other"
] |
[
"Currently, Medical Subject Headings (MeSH) are manually assigned to every biomedical article published and subsequently recorded in the PubMed database to facilitate retrieving relevant information.",
"With the rapid growth of the PubMed database, large-scale biomedical document indexing becomes increasingly important.",
"MeSH indexing is a challenging task for machine learning, as it needs to assign multiple labels to each article from an extremely large hierachically organized collection.",
"To address this challenge, we propose KenMeSH, an end-to-end model that combines new text features and a dynamic K nowledgeen hanced mask attention that integrates document features with MeSH label hierarchy and journal correlation features to index MeSH terms.",
"Experimental results show the proposed method achieves state-of-the-art performance on a number of measures.",
"The PubMed 1 database is a resource that provides access to the MEDLINE bibliographic database of references and abstracts together with the full text articles of some of these citations which are available in the PubMed Central 2 (PMC) repository.",
"MEDLINE 3 contains more than 28 million references (as of Feb. 2021) to journal articles in the biomedical, health, and related disciplines.",
"Journal articles in MEDLINE are indexed according to Me dical S ubject H eadings (MeSH) 4 , an hierarchically organized vocabulary that has been developed and maintained by the National Library of Medicine (NLM) 5 .",
"Currently, there are 29,369 main MeSH headings, and each MEDLINE citation 1 https://pubmed.ncbi.nlm.nih.gov/about/ 2 https://en.wikipedia.org/wiki/PubMed_Central 3 https://www.nlm.nih.gov/medline/medline_overview.",
"html 4 https://www.nlm.nih.gov/mesh/meshhome.html 5 https://www.nlm.nih.gov has 13 MeSH indices, on average.",
"MeSH terms are distinctive features of MEDLINE and can be used in many applications in biomedical text mining and information retrieval (Lu et al., 2008; Huang et al., 2011; Gu et al., 2013), being recognized as important tools for research (e.g., knowledge discovery and hypothesis generation).",
"Currently, MeSH indexing is done by human annotators who examine full articles and assign MeSH terms to each article according to rules set by NLM 6 .",
"Human annotation is time consuming and costly the average cost of annotating one article in MEDLINE is about $9.40 (Mork et al., 2013).",
"Nearly 1 million citations were added to MEDLINE in 2020 (approximately 2,600 on a daily basis) 7 .",
"The rate of articles being added to the MEDLINE database is constantly increasing, so there is a huge financial and time-consuming cost for the status quo .",
"Therefore, it is imperative to develop an automatic annotation system that can assist MeSH indexing of large-scale biomedical articles efficiently and accurately.",
"Automatic MeSH indexing can be regarded as an extreme multi-label text classification (XMC) problem, where each article can be labeled with multiple MeSH terms.",
"Compared with standard multi-label problems, XMC finds relevant labels from an enormous set of candidate labels.",
"The challenge of large-scale MeSH indexing comes from both the label and article sides.",
"Currently, there are more than 29,000 distinct MeSH terms, and new MeSH terms are updated to the vocabulary every year.",
"The frequency of different MeSH terms appearing in documents are quite imbalanced.",
"For instance, the most frequent MeSH term, humans', appears in more than 8 million citations; Pandanaceae', on the other hand, appears in only 31 documents (Zhai 6 https://www.nlm.nih.gov/bsd/indexing/training/TIP_ 010.html 7 https : / / www. nlm . nih . gov / bsd / medline _ pubmed _ production_stats.html 2941 et al., 2015).",
"In addition, the MeSH terms that have been assigned to each article varies greatly, ranging from more than 30 to fewer than 5. Furthermore, semantic features of the biomedical literature are complicated to capture, as they contain many domain-specific concepts, phrases, and abbreviations.",
"The aforementioned difficulties make the task more complicated to generate an effective and efficient prediction model for MeSH indexing.",
"In this work, inspired by the rapid development of deep learning, we propose a novel neural architecture called KenMeSH ( K nowledgeen hanced MeSH labelling) which is suitable for handling XMC problems where the labels are arrayed hierarchically and could capture useful information as a directed graph.",
"Our method uses a dynamic knowledge-enhanced mask attention mechanism and incorporates document features together with label features to index biomedical articles.",
"Our major contributions are: 1. We design a multi-channel document representation module to extract document features from the title and the abstract using a bidirectional LSTM.",
"We use multi-level dilated convolution to capture semantic units in the abstract channel.",
"This module combines a hybrid of information, at the levels of words and the latent representations of the semantic units, to capture local correlations and long-term dependencies from text.",
"2. Our proposed method appears to be the first to employ graph convolutional neural networks that integrate information from the complete MeSH hierarchy to map label representations.",
"3. We propose a novel dynamic knowledge-enhanced mask attention mechanism which incorporates external journal-MeSH co-occurrence information and document similarity in the PubMed database to constrain the large universe of possible labels in the MeSH indexing task.",
"4. We evaluate our model on a corpus of PMC articles.",
"Our proposed method consistently achieves superior performance over previous approaches on a number of measures.",
"To address the MeSH indexing task mentioned in above section, the National Library of Medicine",
"developed Medical Text Indexer (MTI) software that automatically recommends MeSH terms to each MEDLINE article using the abstract and title as input (Aronson et al., 2004).",
"It first generates the candidate MeSH terms for given articles, and then ranks the candidates to provide the final predictions.",
"There are two modules in MTI MetaMap Indexing (MMI) and PubMed-Related Citations (PRC) (Lin and Wilbur, 2007; Aronson and Lang, 2010).",
"MetaMap is NLM-developed software which extracts the biomedical concepts in the documents and maps them to Unified Medical Language System concepts.",
"MMI recommends MeSH terms using the biomedical concepts discovered by MetaMap.",
"PRC uses k -nearest neighbours to find the MeSH annotations of similar citations in MEDLINE.",
"The two mentioned sets of MeSH terms combine the final MeSH recommendations from MTI.",
"BioASQ 8 , an EU-funded project, has organized challenges on automatic MeSH indexing since 2013, which provides opportunities to involve more participants in continuing to the development of MeSH indexing systems.",
"Many effective MeSH indexing systems have been developed since then, such as MeSHLabeler (Liu et al., 2015), DeepMeSH (Peng et al., 2016), AttentionMeSH (Jin et al., 2018), and MeSHProbeNet (Xun et al., 2019).",
"MeSHLabeler introduced a Learning-to-Rank (LTR) framework, which is a two-step strategy, first predicting the candidate MeSH terms and then ranking them to obtain the final suggestions.",
"MeSHLabeler first trained an independent binary classifier for each MeSH term and then used various evidence, including similar publications and term frequencies, to rank candidate MeSH terms.",
"DeepMeSH is an improved version of MeSHLabeler, which also uses the LTR strategy.",
"It first generates MeSH predictions by incorporating deep semantics in the word embedding space, and then ranks the candidates.",
"AttentionMeSH and MeSHProbeNet are based on bidirectional recurrent neural networks (RNNs) and attention mechanisms.",
"The main difference between AttentionMeSH and MeSHProbeNet is that the former uses a label-wise attention mechanism while the latter develops self-attentive MeSH probes to extract comprehensive aspects of information from the input articles.",
"ac-8 http://bioasq.org",
"cess.",
"Jimeno-Yepes et al. (2013) randomly selected 1413 articles from the PMC Open Access Subset and used automatically-generated summaries from these full texts as input to MTI for MeSH indexing.",
"Demner-Fushman and Mork (2015) collected 14,828 full text articles from PMC Open Access Subset and developed a rule-based string-matching algorithm to extract a subject of MeSH terms called check tags' that are used to describe the characteristics of the subjects.",
"Wang and Mercer (2019) randomly selected 257,590 full text articles from PMC Open Access Subset and developed a multichannel model using CNN-based feature selection to extract important information from different sections of the articles.",
"HGCN4MeSH (Yu et al., 2020) used the PMC dataset generated by Wang and Mercer (2019) and employed graph convolutional neural network to learn the co-occurrences between MeSH terms.",
"FullMeSH (Dai et al., 2019) and BERTMeSH (You et al., 2020) used all available full text articles in PMC Open Access Subset.",
"FullMeSH applied an attention-based CNN to predict the MeSH terms and LTR to get the final MeSH candidates; BERTMeSH incorporated pre-trained BERT and an attention mechanism to improve the performance of MeSH indexing.",
"Graph convolutional neural networks (GCN)s (Kipf and Welling, 2017) have received considerable attention and achieved remarkable success in natural language processing recently.",
"Some text classification systems introduce GCN by formulating their problems as graph-structural tasks.",
"For instance, TextGCN (Yao et al., 2019) built a single text graph for a corpus based on word co-occurrence and document word relations to infer labels.",
"Zhang et al. (2019a) built a GCN-based dependency tree of a sentence to exploit syntactical information and word dependencies for sentiment analysis.",
"Other research focused on learning the relationships between nodes in a graph, such as the label co-occurrences for multi-label text classifica-tions; e.g., MAGNET (Pal et al., 2020) built a label graph to capture dependency structures among labels, and Rios and Kavuluru (2018) built a multi-label classifier that was learned from a 2-layer GCN over the label hierarchy.",
"representations that could be utilized for specific tasks.",
"For instance, Pujary et al. (2020) used GCN to learn an undirected graph derived from disease names in the MeSH taxonomy in order to detect and normalize disease mentions in biomedical texts.",
"MeSH indexing can be regarded as a multi-label text classification problem in which, given a set of biomedical documents X = { x 1 , x 2 , ..., x n } and a set of MeSH labels Y = { y 1 , y 2 , ..., y L } , multi-label classification learns the function f : X [0 , 1] Y using the training set D = ( x i , Y i ) , i = 1 , ..., n , where n is the number of documents in the set.",
"Figure 1 illustrates our overall architecture.",
"Our model is composed of a multi-channel document representation module, a label features learning module, a dynamic semantic mask attention module, and a classifier.",
"The multi-channel document representation module has two input channels the title channel and the abstract channel, for each type of text.",
"These two texts are represented by two embedding matrices, namely E title R d , the word embedding matrix for the title, and E abstract R d , the word embedding matrix for the abstract.",
"We first apply a bidirectional Long Short-Term Memory (biLSTM) network (Hochreiter and Schmidhuber, 1997) in both channels to encode the two types of text and to generate the hidden representations h t for each word at time step t .",
"The computations of h t and h t are illustrated below: h t = LST M ( x t , h t 1 , c t 1 ) h t = LST M ( x t , h t 1 , c t 1 ) (1) We then obtain the final representation for each word by concatenating the hidden states from both directions, namely h t = [ h t : h t ] and h t R l 2 d h , where l is the number of words in the text and d h is the hidden dimensions.",
"The biLSTM returns context-aware representations H title and H abstract for the title and abstract channels, respectively: H title = biLST M ( E title ) H abstract = biLST M ( E abstract ) (2) 2943 Figure 1: Model Architecture There are three main components in our method.",
"In order to generate high-level semantic representations of abstracts, we introduce a dilated convolutional neural network (DCNN) to the abstract channel.",
"The concept of dilated convolution was originally developed for wavelet decomposition (Holschneider et al., 1990), and has been applied to NLP tasks such as neural machine translation (Kalchbrenner et al., 2017) and text classification (Lin et al., 2018).",
"The main idea of DCNN is to insert holes' in convolutional kernels, which extract the longer-term dependencies and generate higher-level representations, such as phases and sentences.",
"Following Lin et al. (2018), we apply a multi-level DCNN with different dilation rates on top of the hidden representations generated by the biLSTM on the abstract channel.",
"Small dilation rates capture phrase-level information, and large ones capture sentence-level information.",
"The DCNN returns the semantic features of the abstract channel D abstract R ( l s +1) 2 d h , where s is the width of the convolution kernels.",
"MeSH taxonomies are organized in 16 categories, and each is further divided into subcategories.",
"Within each subcategory, MeSH terms are ordered hierarchically from most general to most specific, up to 13 hierarchical levels.",
"As the MeSH hierarchy is important to our task, we use a two-layer GCN to incorporate the hierarchical parent and child information among labels.",
"We first use the MeSH descriptors to generate a label feature vector for each MeSH term.",
"Each label vector is calculated by averaging the word embedding of each word in its descriptors: v i = 1 N (cid:88) j N w j , i = 1 , 2 , ..., L, (3) where v i R d , N is the number of words in its descriptor, and L is the number of labels.",
"In the graph structure, we formulate each node as a MeSH label, and edges represent relationships in the MeSH hierarchy.",
"The edge types of a node include edges from its parent, from its children, and from itself.",
"At each GCN layer, the node feature is aggregated by its parent and children to form the new label feature for the next layer: h l +1 = ( A h l W l ) , (4) where h l and h l +1 RL d indicate the node presentation of the l th and ( l + 1) th layers, ( ) denotes an activation function, A is the adjacency matrix of the MeSH hierarchical graph, and W l is 2944 a layer-specific trainable weight matrix.",
"We then concatenate the label feature vectors from descriptors in Equation 3 with GCN label vectors to form: H label = [ v : h l +1 ] , (5) where H label RL 2 d is the final label vector.",
"In the dynamic knowledge-enhanced mask attention module, we integrate external knowledge from outside sources to generate a unique mask for each article dynamically.",
"We consider only a subset of the full MeSH list by employing a masked label-wise attention that computes the element-wise multiplication of a mask matrix and an attention matrix for two reasons.",
"First, the MeSH terms are numerous and have widely varying occurrence frequencies.",
"Therefore, for each MeSH label, there are far more negative examples than positive ones.",
"For each article, selecting a subset of MeSH labels, namely a MeSH mask, down-samples the negative examples, which forces the classifier to concentrate on the candidate labels.",
"Second, the issue with the original attention mechanism (Bahdanau et al., 2015) is that the classifier focuses on spotting relevant information for all predicted labels, which is a lack of pertinence.",
"Using a masked label-wise attention allows the classifier to find relevant information for each label inside the MeSH mask.",
"The dynamic ensures that the module generates a unique MeSH mask for each article, specifically.",
"To generate the MeSH masks, we consider two external knowledge sources: journal information and document similarity.",
"The journal information refers to the name of the journal in which an article was published, which usually defines a specific research domain.",
"We expect that articles published in the same journal tend to be indexed with MeSH terms that are relevant to the journal's research focus.",
"We build a journalMeSH label co-occurrence matrix using conditional probabilities, i.e., P ( L i | J j ) , which denote the probabilities of occurrence of label L i when journal J j appears.",
"where CL i J j denotes the number of co-occurrences of L i and J j , and CJ j is the number of occurrences of J j in the training set.",
"To avoid the noise of rare co-occurrences, a threshold filters noisy correlations.",
"We then use k -nearest neighbors (KNN) to choose a subset of specific MeSH terms for each article by referring to document similarity.",
"We represent each article by the IDF-weighted sum of word embeddings in the abstract: D idf = (cid:80) ni =1 IDF i e i (cid:80) ni =1 IDF i , (8) where e i is the word embedding, and IDF i is the inverse document frequency of the word.",
"Next, we use KNN based on cosine similarity between abstracts to find the K nearest neighbours for each article in the training set.",
"To form the unique MeSH mask for article a , we collect MeSH terms M a from the neighbours of a : M a = T 1 T 2 ... TK , (9) where T i is the MeSH label set from the i th neighbour of article a .",
"We then join the MeSH labels generated from journalMeSH co-occurrence for the journal that article a has been published in together with the MeSH terms obtained from the neighbours of article a to form the final MeSH mask label set M : M = M j M a (10) Then we assign a value to each label in Y to form M vec [0 , 1] Y .",
"If the label appears in M , we assign 1, 0 otherwise.",
"The label order of M vec is the same as H label .",
"We calculate the similarity between MeSH terms and the texts in two channels by applying masked label-wise attention.",
"H masked = H label (cid:12) M vec title = Softmax ( H title H masked ) abstract = Softmax ( D abstract H masked ) , (11) where (cid:12) denotes element-wise multiplication, H masked denotes the masked label features, and title and abstract measure how informative each text fragment is for each label in the title and abstract channels, respectively.",
"We then generate the label-specific title and abstract representations, respectively: c title = T title H title c abstract = T abstract D abstract , (12) 2945 Method Micro-average Measure Example Based Measure MiF MiP MiR EBF EBP EBR MTI 0.390 0.379 0.402 0.393 0.378 0.408 HGCN4MeSH 0.524 0.763 0.399 0.529 0.762 0.405 DeepMeSH 0.639 0.669 0.612 0.631 0.667 0.627 BERTMeSH 0.667 0.696 0.640 0.657 0.700 0.650 FullMeSH (Full) 0.651 0.683 0.623 0.643 0.680 0.639 BERTMeSH (Full) 0.685 0.713 0.659 0.675 0.717 0.667 KenMeSH 0.745 0.864 0.655 0.738 0.863 0.644 0.021 0.011 0.027 0.018 0.011 0.022 Table 1: Comparison to previous methods across two main evaluation metrics.",
"such that c title RL 2 d , and c abstract RL 2 d .",
"We sum up the representations in the title and abstract channels to form the document vector for each article: D = c title + c abstract (13) 3.4 Classifier We gain scores for each MeSH term i : y i = ( D (cid:12) H label ) , i = 1 , 2 , ..., L, (14) where ( ) represents the sigmoid function.",
"We train our model using the multi-label binary cross-entropy loss (Nam et al., 2014): L = L (cid:88) i =1 [ y i log ( y i ) (1 y i ) log (1 y i ))] , (15) where y i [0 , 1] is the ground truth of label i , and y i [0 , 1] denotes the prediction of label i obtained from the proposed model.",
"We follow Dai et al. (2019) and You et al. (2020) by using the PMC FTP service 9 (Comeau et al., 2019) and downloading PMC Open Access Subset (as of Sep. 2021), totalling 3,601,092 citations.",
"We also download the entire MEDLINE collection based on the PubMed Annual Baseline Repository (as of Dec. 2020) and obtain 31,850,051 citations with titles and abstracts.",
"In order to reduce bias, we only focus on articles that are annotated by human curators (not annotated by a curated' or auto' modes in MEDLINE).",
"We then match PMC articles with the citations in PubMed to PMID and obtain a set of 1,284,308 citations.",
"Out of these PMC articles, we use the latest 20,000 articles as the test set, the next latest 200,000 articles as the validation data set, and the remaining 1.24M articles as the training set.",
"In total, 28,415 distinct MeSH terms are covered in the training dataset.",
"We implement our model in PyTorch (Paszke et al., 2019).",
"For pre-processing, we removed non-alphanumeric characters, stop words, punctuation, and single character words, and we converted all words to lowercase.",
"Titles longer than 100 characters and abstracts longer than 400 characters are truncated.",
"We use pre-trained biomedical word embeddings (BioWordVec) (Zhang et al., 2019b), and the embedding dimension is 200.",
"To avoid overfit-ting, we use dropout directly after the embedding layer with a rate of 0 .",
"2 .",
"The number of units in hidden layers are 200 in all three modules.",
"We use a three-level dilated convolution with dilation rate [1 , 2 , 3] and select 1000 nearest documents to generate MeSH masks for each article.",
"We use FAISS (Johnson et al., 2019) to find similar documents for each citation among the training set, and the whole process takes 10 hours.",
"We use Adam optimizer (Kingma and Ba, 2015) and early stopping strategies.",
"The learning rate is initialized to 0 .",
"0003 , and the decay rate is 0 .",
"9 in every epoch.",
"The gradient clip is applied to the maximum norm of 5. The batch size is 32.",
"The model trained for 50 hours on a single NVIDIA V100 GPU.",
"The detailed hyper-parameter settings are shown in Table 3. The code for our method is available at https://github.com/xdwang0726/KenMeSH.",
"We use three main evaluation metrics to test the performance of MeSH indexing systems: Micro-average measure (MiM), example-based measure (EBM), and ranking-based measure (RBM), where MiM and EBM are commonly used in MeSH indexing tasks and RBM is commonly used in evaluating multi-label classification.",
"Micro-average F-measure (MiF) aggregate the global contributions of all MeSH labels and then calculate the harmonic mean of micro-average precision (MiP) and micro-average recall (MiR), which are heavily influenced by frequent MeSH terms.",
"Example-based measures are computed per data point, which computes the harmonic mean of standard precision (EBP) and recall (EBR) for each data point.",
"In the ranking-based measure, precision at k ( P @ k ) shows the number of relevant MeSH terms that are suggested in the topk recommendations of the MeSH indexing system, and recall at k ( R @ k ) indicates the proportion of relevant items that are suggested in the topk recommendations.",
"The detailed computations of evaluation metrics can be found in Appendix A. The threshold has a large influence on MiF and EBF, see Appendix B. We select final MeSH labels whose predicted probability is larger than a tuned threshold t i : MeSH i = (cid:40) y i t i , 1 y i < t i , 0 (16) where t i is the threshold for MeSH term i .",
"We compute optimal threshold for each MeSH term on the validation set following Pillai et al. (2013) that tunes t i by maximizing MiF: t i = argmax T MiF ( T ) , (17) where T denotes all possible threshold values for label i .",
"We evaluate our proposed model with five state-of-the-art models: MTI, DeepMeSH, FullMeSH, BERTMeSH and HGCN4MeSH.",
"Among these, MTI, DeepMeSH, BERTMeSH, and HGCN4MeSH are trained with abstracts and titles only; FullMeSH (Full) and BERTMeSH (Full) are trained with full PMC articles.",
"Our proposed model is trained on titles and abstracts, and is tested using 20,000 of the latest articles.",
"We mainly focus on MiF, which is the main evaluation metric in MeSH indexing task.",
"We compare our model against previous related systems on micro-average measure and example-bases measure in Table 1. Each row in the table shows all evaluation metrics on a specific method, where the best score for each metric is indicated.",
"As reported, our model achieves the best performance on most evaluation metrics, expect MiR and EBR, on which BERTMeSH (Full) achieves the best performance.",
"This is because that BERTMeSH (Full) is trained on full text articles, which uses much more content information in the articles than ours.",
"Our model outperforms the subset of systems that were trained only on the abstract and the title MTI, HGCN4MeSH, DeepMeSH and BERTMeSH in all metrics.",
"Most importantly, there is improvement in precision without a decrease in recall.",
"Comparing with systems trained on full articles indicates that our model achieves the best MiF, and is only slightly below BERTMeSH (Full) on MiR (0.4 percentage points).",
"Although our model is trained only on the abstract and title (which may suggest that it captures less complex semantics), it performs very well against more complex systems.",
"Furthermore, we compare the performance of our model with HGCN4MeSH on ranking-based measures that do not require a specific threshold.",
"The results, summarized in Table 2, show that our model always performs better than HGCN4MeSH with up to almost 18% improvement.",
"As the frequency of different MeSH terms are imbalanced, we are interested in examining the effi-ciency of our model on infrequent MeSH terms.",
"We divide MeSH terms into four groups based on the number of occurrences in the training set: (0 , 100) , [100 , 1000) , [1000 , 5000) , and [5000 , ) .",
"Figure 2a shows the distribution of MeSH terms and percent of occurrence among the four divided groups in the training set, which indicates that the distribution of MeSH frequency is highly biased and it 2947",
"falls into a long-tail distribution.",
"Figure 2b and 2c show the performance of our model comparing to MTI baseline in the four MeSH groups on MiF and EBF respectively.",
"Our model obtains substantial improvements among frequent and infrequent labels on both MiF and EBF.",
"We are interested in studying how the effectiveness and robustness of our model are due to the various modules, such as the multi-channel mechanism, the dilated CNN, the label graph, and masked attention.",
"To further understand the impacts of these factors, we conduct controlled experiments with four different settings:",
"(a) examining a single channel architecture by concatenating the title and abstract as input into the abstract channel;",
"(b) removing the dilated CNN;",
"(c) replacing the label feature learning module with a fully connected layer; and",
"(d) removing the masked attention module.",
"The influence of each of these modules can then be evaluated individually.",
"The results are summarized in Table 4. Impacts on Multi-channel Settings As shown in Table 4, the multi-channel setting outperforms the single channel one.",
"The reason for this could be that the single channel model misses some important features in titles and abstracts in the LSTM layer.",
"LSTM has the capability to learn and remember over long sequences of inputs, but it can be challenging to use when facing very long input sequences.",
"Concatenating the title and abstract into one longer sequence may hurt the performance of LSTM.",
"To be more explicit, the single channel model may be remembering insignificant features in the LSTM layer when dealing with longer sequences.",
"Therefore, extracting information from the title and the abstract separately is better than directly concatenating the information.",
"Extractions As reported in Table 4, the performance drops when removing the dilated CNN layer.",
"The reason for this seems to be that multi-level dilated CNNs can extract high-level semantic information from the semantic units that are often wrapped in phrases or sentences, and then capture local correlation together with longer-term dependencies from the text.",
"Compared with word-level information extracted from the biLSTM layer, high-level information extracted from the semantic units seems to provide better understanding of the text, at least for the purposes of labelling.",
"Impacts on Learning Label Features As shown in Table 4, not learning the label features has the largest negative impacts on performance especially for recall (and subsequently F-measure).",
"By removing the label features, the model pays more attention to the frequent MeSH terms and misclas-sifies infrequent labels as negative.",
"This indicates that label features learned through GCN can capture the hierarchical information between MeSH terms, and MeSH indexing for infrequent terms can benefit from this hierarchical information.",
"Impacts on Dynamic Knowledge-enhanced Mask Attention Table 4 shows a performance drop when removing the masked attention layer, suggesting that the attention mechanism has positive impacts on performance.",
"This result further suggest that the masked attention takes advantage of incorporating external knowledge to alleviate the extremely large pool of possible labels.",
"To select the proper mask for each article, two hyperparame-ters are used: threshold for journal-MeSH occurrence and the number of nearest articles K .",
"With = 0 .",
"5 and K = 1000 , all of the gold-standard MeSH labels are guaranteed to be in the mask.",
"We propose a novel end-to-end model integrating document features and label hierarchical features for MeSH indexing.",
"We use a novel dynamic knowledge-enhanced mask attention mechanism to handle the large universe of candidate MeSH terms and employ GCN in extracting label correlations.",
"Experimental results demonstrate that our proposed model significantly outperforms the baseline models and provides especially large improvements on infrequent MeSH labels.",
"In the future, we believe two important research directions will lead to further improvements.",
"First, we plan to explore full text articles, which contain more information, to see whether our model takes advantage of the full text to improve the performance of large-scale MeSH indexing.",
"Second, we are interested in integrating knowledge from the Unified Medical Language System (UMLS) (Bodenreider, 2004), a comprehensive ontology of biomedical concepts, in our model.",
"We thank all reviewers and area chairs for their constructive comments and feedback.",
"Resources used in preparing this research were provided, in part, by Compute Ontario 10 , Compute Canada 11 , the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute 12 .",
"This research is partially funded by The Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery Grant to R. E. Mercer.",
"F. Rudzicz is supported by a CIFAR Chair in AI."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"objective",
"objective",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we study the effect of commonsense and domain knowledge while generating responses in counseling conversations using retrieval and generative methods for knowledge integration.",
"We propose a pipeline that collects domain knowledge through web mining, and show that retrieval from both domain-specific and commonsense knowledge bases improves the quality of generated responses.",
"We also present a model that incorporates knowledge generated by COMET using soft positional encoding and masked self-attention.",
"We show that both retrieved and COMET-generated knowledge improve the system's performance as measured by automatic metrics and by human evaluation.",
"Lastly, we present a comparative study on the types of knowledge encoded by our system, showing that causal and intentional relationships benefit the generation task more than other types of commonsense relations.",
"Mental health care has been of great importance as the ongoing COVID-19 pandemic poses a serious negative impact on people's mental wellbeing (Paredes et al., 2021).",
"Not only there is a larger unmet need for counseling services, the health care workers are also in tremendous physical and mental strain (Huffman et al., 2021).",
"With this in mind, it is natural to consider how the advancement in natural language processing can be leveraged to help counseling.",
"Across different counseling styles, reflective listening has always been a fundamental procedure underlying effective counseling practices (Katz and McNulty, 1994).",
"Reflective listening asks the counselor not only to listen to the client carefully, but also to actively make a guess of what the client means.",
"If carried out the right way, it gives the client a sense of being understood and facilitates further self-exploration.",
"However, people do not always say what they mean, which is especially the case for patients seeking mental support.",
"Reflection, as the response made based on reflective listening, sometimes needs to decode the client's meaning not explicitly expressed in words.",
"On the other hand, pressing the client to clarify the missing part may hinder them from expressing their own experience (Miller and Rollnick, 2012).",
"Thus, counseling frequently calls for counselors to make inferences based on their prior knowledge.",
"For example, when the client says I had a really hard time sticking to my diet this week , a plausible reflection may be You're wondering whether you'll be able to lose weight this way , which relates diet with losing weight as an inference based on commonsense knowledge.",
"Moreover, making a good reflection may sometime require domain knowledge.",
"For example, to understand the client in Figure 1, the counselor needs to know that smoking can be a possible cause of emphysema, and Chantix is a medication for smoke cessation.",
"All these cases pose challenges to state-of-the-art language models.",
"In this paper, we propose the task of knowledge enhanced counseling reflection generation, which utilizes the dialogue context as well as commonsense and domain knowledge.",
"This extra knowledge is needed since existing pre-trained language models struggle to produce coherent and informative responses that capture relevant knowledge, even if they have acquired some knowledge during the pre-training phase (Petroni et al., 2019a).",
"A system that generates accurate counseling reflections can serve as a tool to aid counseling training or assist counselors during a session by providing alternative reflections in response to client's statements.",
"We experiment with two main strategies to incorporate knowledge.",
"The first is retrieval , which acquires sentences containing relevant knowledge 3096 Figure 1: Sample medical entities extracted from the client's utterance in a counseling session using Amazon Comprehend Medical.",
"based on the vector representations of sentences from the dialogue and assertions in the knowledge base using a BERT-based model (Reimers and Gurevych, 2019a).",
"The second strategy is generative , where we first extract key phrases from the dialogue, and query a COMET model for plausible knowledge triplets with a predefined set of relations (Bosselut et al., 2019).",
"We propose a knowledge-grounded BART (Lewis et al., 2020) model using soft positional encoding and masked self-attention representations to indicate the knowledge position and make the introduced knowledge only visible to the key phrase it relates to.",
"In addition, we explore the effect of different knowledge sources on the counseling responses generation task.",
"Although commonsense knowledge bases usually have high coverage for general domain concepts, they contain a limited amount of domain-specific knowledge.",
"This applies particularly to medical terminology.",
"For instance, when querying ConceptNet (Speer et al., 2017), a wellknown knowledge base, for the word Chantix (a prescription smoking cessation aid) we are only able to retrieve three relationships, including synonyms, related terms, and type-of, whereas with a common word daughter ConceptNet provides a total of eleven relationships.",
"For the Chantix example in Figure 1, ConceptNet is also missing important causal relationships regarding side effects or suggested usage, which are especially relevant during a counseling conversation about smoking cessation.",
"To address this challenge, we collect a dataset of counseling domain knowledge using web mining with queries constructed with the medical concepts extracted from the dialogue as well as manually defined templates.",
"We compare this Web-collected data with a public commonsense knowledge base, and show that this data collected with no human annotation can serve as a complementary knowledge resource.",
"We also conduct an ablation study on different categories of commonsense knowledge, and show that intentional or causal relationships are more useful for counseling response generation, a finding consistent with related medical literature.",
"(Miller and Rollnick, 2012).",
"Contributions.",
"The main contributions of this work are as follows:",
"1) We collect a counseling knowledge base and use it along with commonsense knowledge bases for the task of reflection generation using different retrieval-based methods.",
"2) We adopt the encoding scheme from K-BERT on BART to incorporate knowledge generated from COMET.",
"3) We analyze different types of commonsense and domain knowledge, and their effect on the generation task.",
"Previous research has addressed the task of automating response generation in health care and counseling settings.",
"Greer et al. (2019) used a decision tree to deliver pre-written scripts and guide the user to learn a set of positive emotion skills.",
"V et al. (2019) identified medical entities and the client's intent to fetch an answer for cancer related questions.",
"Almusharraf et al. (2020) classified client's responses to choose which question to ask next for smoking cessation.",
"There are also commercial systems like Woebot (Fitzpatrick et al., 2017) that detect mental health issues mentioned by the user and direct them to relevant information.",
"However, there is a limited amount of work on free-form generation as compared to the template-based approaches described above.",
"Shen et al. (2020) focused on generating counseling reflections with GPT-2 based on the dialogue context and responses retrieved from similar counseling sessions.",
"We address a similar 3097 task but enhance the generation process by infusing commonsense and domain specific knowledge to better emulate what counselors do in practice.",
"To the best of our knowledge, the effect of knowledge in counseling response generation is not yet well studied.",
"Large-scale pretrained language models have been shown to encode some knowledge implicitly through their pretraining objectives (Petroni et al., 2019a), including both commonsense (Shwartz et al., 2020) and factual knowledge (Petroni et al., 2019b).",
"However, pretrained language models still struggle with some downstream applications, especially when the model needs to make inference based on context (Do and Pavlick, 2021; Kassner and Schtze, 2020).",
"Thus, recent works have also explored enhancing pretrained models with external knowledge.",
"Introducing knowledge into language models has been shown to be successful on various downstream tasks and model architecture (Ren et al., 2020; Zhao et al., 2020; Song et al., 2019).",
"For instance, Mao et al. (2019) generates story with multitasking learning on commonsense QA datasets.",
"Zhao et al. (2020) used BERT as a knowledge selection module for dialogue generation.",
"Chakrabarty et al. (2020) ranked knowledge generated from the COMET for sarcasm generation.",
"Ji et al. (2020) do multi-hop with a graph convolutional network on ConceptNet.",
"Similarly, our work uses external knowledge sources to enhance text generation for counseling conversations.",
"External knowledge resources have been found useful for enhancing language models.",
"For example, large-scale commonsense knowledge graphs (CSKG) that store structured commonsense knowledge in the form of knowledge triplets.",
"The most widely used CSKG resources include ConceptNet (Speer et al., 2017), ATOMIC (Sap et al., 2019), and TransOMCS (Zhang et al., 2020).",
"There are also medical related knowledge bases such UMLS (Bodenreider, 2004) and OHAMA.",
"1 We use ConceptNet for commonsense and decide to collect a counseling knowledge base as general domain medical knowledge bases have a limited amount of knowledge aligning with our needs.",
"We present a model that leverages a combination of existing commonsense knowledge resources and domain-specific knowledge derived from the target domain.",
"The workflow is illustrated in Figure 3.",
"We focus on the task of generating dialog responses r using the dialogue context c and an external knowledge base K .",
"The dialogue context consists of a sequence of sentences c = ( x 1 , x 2 , ..., x M ) , which are M consecutive utterances in the dialogue.",
"The knowledge base K is a collection of triplets.",
"A triplet is denoted as (cid:15) i = ( e 1 , r, e 2 ) and its surface text form as s i , where e 1 and e 2 are entities and r is the relationship between them.",
"During the generation process, a set of knowledge k c relevant to c are provided to the model with parameters as additional input.",
"The task generate response y maximizing the conditional probability P ( r | c, k c ; ) .",
"In the following section, we describe the method to obtain relevant knowledge k c and the approach we use to incorporate knowledge into the language model.",
"Despite their large size, existing commonsense knowledge bases contain a limited amount of information on domain-specific concepts, especially for causal relationships such as the reason to take a medicine or its side effects.",
"In order to further investigate the effect of domain-specific knowledge in counseling response generation, we propose a pipeline to collect domain knowledge which requires no significant human labor involved.",
"The main steps are as follows.",
"We process each conversation utterance using Amazon Comprehend Medical to extract medical entities, along with their detection confidence scores, ranging between 0 to 1.",
"2 An example of entities extracted from a counseling dialogue is illustrated in Figure 1.",
"Given the distribution of the five medical entity categories in the dataset, shown in Figure 2, we decide to keep medical conditions, medications, tests and treatment procedures entities occurring at least two times, and experimentally set 0.6 as the threshold of confidence scores.",
"Additionally, we manually inspect the resulting entities and remove false positives and misspelled names.",
"After this process we obtain a set of 452 medical entities, distributed as 345 medical conditions, 44 references to medications, and 63 to tests and treatment procedures.",
"Knowledge Collection with Web Queries.",
"Next, we collect domain-specific knowledge relevant to the medical entities through web mining.",
"We compose a set of query templates around causal and intentional relationships frequently observed in the counseling conversations.",
"Each entity types identified during the extraction has a set of eleven distinct query templates as shown in Table 1.",
"Web search queries are constructed based on the templates, and searched on Google via the Zenserp API.",
"3 We keep only the top 100 matching websites for which we extract their text and parse it into sentences using the Spacy toolkit.",
"4 The resulting sentences with medical concepts are then considered as knowledge candidates during our next step.",
"Causal Relationship Classification.",
"In order to identify causal knowledge in our set of knowledge 2 https://aws.amazon.com/comprehend/ 3 https://zenserp.com/ 4 https://spacy.io/ candidates, we set up a binary classification task where we seek to determine whether a given sentence contains a causal relationship.",
"The positive samples used for this classifier consist of 1,331 sentences with cause-effect relationships (e.g., He had chest pains and headaches from mold in the bedrooms) from the SemEval10 Task 8 dataset (Hen-drickx et al., 2010) and an equal amount of negative samples randomly selected from sentences containing other types of semantic relationships in the same dataset.",
"The classifier is initialized with weights from the pretrained BERT-large model and later fine-tuned using the training set.",
"We run this classifier on our set of knowledge candidate sentences and keep sentences for which the classifier achieves confidence scores higher than 0.7, determined empirically through inspection on a small subset of samples.",
"The resulting set consist of 22,980 sentences containing medical concepts relevant to the counseling domain and their causal relationships.",
"To get external knowledge that provides useful information based on the dialogue context c , we assume that k c is semantically close to c .",
"We use embedding distance to model the semantic similarity between the context and knowledge in natural language.",
"More specifically, we use sentence-BERT(Reimers and Gurevych, 2019b) to get an embedding F ( x i ) for each of input sentence x i .",
"The pre-trained weights are obtained from the paraphrase-distilroberta model in the Sentence-Transformers library 5 .",
"We then select s j as relevant knowledge k c based on its cosine similarity to 5 https://www.sbert.net/ 3099 Figure 3: Overall pipeline of the proposed methods the context c .",
"k c = argmax s j K Sim ( F ( c ) , F ( s j )) (1) We test three sentence retrieval methods to select the most relevant sentences.",
"The first, retrieval-each consists of obtaining an k x i for each x i .",
"The second, retrieval-average , matches knowledge sentences based on the document embedding obtained by averaging all sentence embeddings (cid:80) F ( x i ) M .",
"We also test an oracle retrieval ( retrieval-diff ) that uses the difference between the input embedding in retrieval-average and output embeddings F ( y ) as the document embedding.",
"Since the sentence-BERT model is trained on natural language instead of structured data such as knowledge triplets, we convert all the triplets in ConceptNet into their surface text form.",
"We use templates built manually to replace the relation with a phrase, for example, triplet ( knife, CapableOf, cut ) becomes Knife is capable of cut.",
"We follow the practice in (Wolf et al., 2019) and incorporate the knowledge k c retrieved in the previous step by appending sentences in k c to the beginning of the context c .",
"They are separated with the special token </s> as BART use the RoBERTa tokenizer (Liu et al., 2019) for its pre-training.",
"We use BART-large as our baseline in the experiments.",
"To bypass the difficulty of matching text spans in the context to the knowledge base, we use a generative method to predict an entity e 2 in a knowledge triplet, based on the entity e 1 extracted from context c and a specified relationship r .",
"Compared with the retrieval method described in the previous section, this method has the benefit of being able to specify the type of relation in the knowledge triplet.",
"We can thus locate the knowledge relevant to specific tokens rather than the whole sentence.",
"To complete the knowledge triplet, we use COMET, a framework for automatic knowledge base construction.",
"This is a GPT model (Radford et al., 2018) finetuned on knowledge triplets from commonsense knowledge bases such as ConceptNet (Speer et al., 2017) and ATOMIC(Sap et al., 2019).",
"The model takes (cid:15) j = ( e 1 , r, ) as input and predicts e 2 to complete the knowledge triplet.",
"We use the original implementation 6 and the pretrained weights on ConceptNet.",
"For each utterance x i in the dialogue context, we use constituency parsing (Kitaev and Klein, 2018) to find the verb phrase and the noun phrase at depth one in the dependency tree, and use them as the input to the COMET model.",
"Following the categorization in (Hwang et al., 2021), we limit the relationships to the commonsense subset to reduce noise and to limit the number of generated knowledge triplets.",
"For noun phrases, the relations are mostly about their physical properties, such as UsedFor and CapableOf .",
"For verb phrases, we focus on the social-interaction or event-centered aspects, which include relations such as Causes and MotivatedByGoal .",
"For example, for the triplet ( loseweight, HasP rerequisite, ) the model predicts e 2 to be Eat less or Eat healthier .",
"A potential drawback of appending the knowledge at the beginning of the input is that we are not able to include information about knowledge locality as we can not tell the model which piece of the context the knowledge is corresponding to.",
"Therefore, we take inspiration from K-BERT (Liu et al., 2020) and adopt their representation method into our BART-based model, which is referred as K-BART.",
"We experiment with two ways to keep the structure information.We use BART-large as 6 https://github.com/atcbosselut/comet-commonsense 3100 the baseline, and test inserting r and e 2 without modifying the attention and positional embedding noted as inplace .",
"Soft Positional Encoding.",
"As BART's transformer layers follow the implementation of RoBERTa, it uses a learned positional embedding, which assigns a unique embedding vector to each location in the input and captures the sequential nature of the input.",
"For COMET generated knowledge, we plug in r and e 2 next to its corresponding e 1 in the original context.",
"Note that the input sentence is no longer a natural sentence, which is different from instances in pretraining.",
"Consider the following sentence with corresponding knowledge in brackets: I've been smoking [causes cancer] too much,.",
"This is usually regarded as two sentences: the original input I've been smoking too much and the introduced knowledge smoking causes cancer.",
"However, plain positional encoding scheme is not enough to represent this information.",
"Hence, we treat the input sequence as a tree structure, where the r and e 2 are treated as a branch to the original input at the location next to e 1 .",
"In this case, causes and too are both considered as the fourth token right after smoking.",
"With this approach, the main body of the sentence will have the same index as a sentence without additional knowledge.",
"Mask-Self-Attention.",
"The information introduced by a COMET generated knowledge triplet is only relevant to the first argument e 1 from the original context.",
"Therefore, we use attention mask to modify the visibility of each part in the input sequence, and hide the introduced knowledge from other irrelevant parts of the input.",
"The tokens in the dialogue context can see each other as usual, but the introduced knowledge r and e 2 are only visible to their corresponding e 1 , which means their attention weights are always 0 for other parts of the input.",
"In this way, unrelated tokens will not be affected by the semantics of introduced knowledge.",
"We choose BART as the backbone network for our generation model.",
"It is a standard seq2seq style transformer which achieved SoTA on multiple down stream tasks with a bidirectional encoder and a left-to-right decoder, which generalizes both GPT2 and BERT.",
"Each model is trained with three random seeds.",
"We use the dataset from (Prez-Rosas et al., 2016) on Motivational Interviewing for language model fine-tuning.",
"The dataset consists of 277 counseling sessions, covering different topics on behavior change, including smoking cessation and weight management.",
"It has annotations on counselor verbal behaviors, such as asking a question, making a reflective response, or seeking collaboration.",
"In the experiments, we form data samples with a reflective response as the target text y and use five former utterances within the counseling dialog as the context c .",
"That leaves us over 3000 samples after filtering.",
"We use ConceptNet as the knowledge base providing commonsense knowledge.",
"It has over 21 million knowledge triplets with a set of 34 relations covering a wide variety of knowledge, including attributional relationships, causal relationships, etc.",
"We only keep triplets that are in English and from a selected subset of relationships based on their semantic meanings, refer to the appendix for details.",
"This leaves us with a collection of about 3.4 million triplets.",
"We evaluate our model with several common metrics.",
"We measure the word-overlapping based relevance using BLEU-1/2 (Papineni et al., 2002), ROUGE-1/2 (Lin, 2004), and METEOR (Banerjee and Lavie, 2005).",
"We measure the contextual embedding similarity using BertScore (Zhang et al., 2019).",
"We measure the diversity with the ratio of unique unigrams or bigrams among generated sentences (Li et al., 2016).",
"We first examine how the knowledge from different retrieval methods benefits the system.",
"All the experiments use domain-specific knowledge as the data source.",
"Table 2 shows our experimental results.",
"The retrieval-each method using sentence-level embeddings exceeds the baseline on Rouge-1 and METEOR, while the retrieval-average method, using context-level embeddings of less granularity, outperforms other methods in BLEU-2, Rouge-2, and BertScore.",
"Meanwhile, the oracle method retrieval-diff unsurprisingly gets the highest score in all metrics by a large margin except Dist-1.",
"Overall, results indicate that it is feasible to find relevant information from a domain-specific knowl-3101 Model BLEU-1 BLEU-2 Rouge-1 Rouge-2 METEOR BertScore Dist-1 Dist-2 Baseline 11.67 1.38 18.94 3.04 8.90 85.39 0.21 1.90 Retrieval-each 10.68 1.16 20.33 2.99 9.13 85.36 0.19 1.73 Retrieval-avg 11.60 1.43 18.69 3.28 8.30 85.44 0.22 1.89 Retrieval-diff 13.63 1.80 24.23 5.24 11.41 85.99 0.21 2.01 Table 2: Performance of different retrieval methods to obtain relevant knowledge.",
"edge base to improve generation given the ground truth.",
"Next, we investigate whether knowledge from COMET, a generative approach, can provide additional context to the generation task.",
"We also evaluate whether masked attention Att or soft positional encoding Pos are better strategies to infuse knowledge by providing locality information of what tokens the knowledge is related to.",
"We show the results in Table 3.",
"The inplace method, which inserts the relation r and the generated e 2 next to e 1 , shows a significant improvement over the baseline.",
"More specifically, the improvement in Dist-1/2 suggests that commonsense stored in COMET can also be leveraged to introduce new words and concepts into the response.",
"Using masked attention provides further improvements in several automatic metrics, except for a slightly lower BLEU score.",
"Interestingly, the soft positional encoding worsens the performance regardless being used by itself or when combined with masked attention.",
"One potential explanation for this is that BART is more robust to masked attention as its effects are similar to attention dropout, while the soft positional encoding causes more position collision and requires more training samples to be effective.",
"After showing that both retrieved and generated knowledge helps to improve the generation of counseling responses, a natural question that follows is: how does the knowledge resource itself affect the overall performance?",
"and commonsense knowledge.",
"During our experiments, we use the retrieval-diff method, which can be seen as an upper-bound of performance using the actual ground truth response.",
"The knowledge candidates are obtained from either ConceptNet triplets in their surface text form or domain-related knowledge collected from the Internet as described in 3.2.",
"Domain Specific Knowledge vs Commonsense Knowledge.",
"As shown in Table 4 both domain-specific knowledge and commonsense knowledge serve as useful sources of knowledge resources for our generation task.",
"However, the model using ConceptNet performs significantly better than the model using domain-specific knowledge in all metrics except Dist-2.",
"One potential reason for this is that the sheer amount of commonsense knowledge is much larger than the amount we collect and has better coverage for what is mentioned in the dialogue context.",
"However, our experiments show that aggregating both types of knowledge further improves the system's performance.",
"This suggests the domain-specific knowledge provides complementary information relevant to the counseling domain, such as the side effect for a medication, that is not captured by the commonsense knowledge base.",
"Note that more than 20% of the retrieved sentences are from the domain-specific knowledge base, while the commonsense knowledge base is more than 30 times larger in size.",
"This further shows that our data collection pipeline is able to provide knowledge that is more relevant to the dialogue context, with the added benefit of no human annotation involved.",
"The Role of Different Types of Commonsense Knowledge.",
"We evaluate the role of different types of knowledge by conducting an ablation study based on the main categories in Conceptnet, including attribution, causal, comparison, conditional, intentional, spatial, and temporal categories.",
"We build separate models by removing a commonsense knowledge category at a time.",
"Results in Table 5 show that removing the intentional relationships harms the performance the most on Rouge-1/2 and METEOR, and removing the causal relationships leads to the lowest score on BLEU-1 and BertScore.",
"Interestingly, these relations are important for counseling conversations where the counselor usually infer the intention or causes behind their clients statements.",
"For instance, in smoke cessation counseling, counselors might be aware that the main reasons to quit are related to well-being or personal relationships.",
"Removing a few sets of relationships, such as Attribution or Temporal , causes minimal performance drop or even an improvement.",
"These results suggest that those relationships are not salient or introduce noise during the retrieval process.",
"We conduct a human evaluation where we ask annotators to indicate their preferences between our best performing models from both the retrieval and the generative settings, and a model without knowledge enhancement.",
"We evaluated each each model response using three metrics: Fluency indicating whether the sentence is grammatically correct and natural; Coherence indicating whether the response is on topic and relevant to the dialogue history; Reflectiveness indicating if the response summarizes what the client has said or interprets what the client means.",
"All these metrics are scored with a three-point Likert scale.",
"We also ask the annotators if the retrieved knowledge is helpful for generating a better response, where the knowledge is triplets for the generative setup and sentences for the retrieval setup.",
"In addi-3103 tion, we ask the annotators to pick the best response between our models and the ground truth.",
"We randomly choose 50 samples for each model to be annotated.",
"The annotation was conducted by two annotators using Qualtrics.",
"7 The annotators had no information on which model generated the the response being annotated.",
"Figure 4 shows the average score for each metric and the percentage of times each system was chosen as the best response.",
"Results show that the ground truth responses have the highest score in terms of reflectiveness and coherence .",
"A potential reason for this is that the ground truth responses are generally longer, thus containing more information from the dialogue context.",
"As for the best response, the ground truth was also the most picked one and our models using knowledge have not outperformed the baseline in this regard.",
"The model using generated knowledge triplets outperforms the baseline in all three metrics, suggesting the motivation and cause relationships generated by COMET brought useful context to the dialog.",
"However, only 22% of the triplets sampled from the test set are considered helpful by our annotators.",
"This calls for closer inspection on the difference between how the models take advantage of commonsense knowledge and how humans perceive it.",
"The model using retrieved knowledge assertions outperforms the baseline on fluency and reflectiveness but has a low coherence score.",
"Among the knowledge assertions, 38% of retrieved sentences are relevant to the dialog when using domain knowledge, and 48% for commonsense knowledge.",
"In this paper, we proposed the task of knowledge enhanced counseling reflection generation, and experimented with different ways to introduce knowledge into the reflection generation model using both retrieval and generative settings.",
"We found that both strategies benefit the generation task on various automatic metrics, which is further consolidated by the human evaluation.",
"In addition, we showed that counseling domain knowledge serves as good complementary knowledge source to ConceptNet.",
"Through an ablation study, we found that commonsense related to intentional and causal relationships is essential for the counseling domain.",
"We are grateful to Kenneth Resnicow and Larry An for their expert input on the importance of domain knowledge in counseling.",
"This material is based in part upon work supported by the Precision Health initiative at the University of Michigan, by the National Science Foundation (grant #1815291), and by the John Templeton Foundation (grant #61156).",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Precision Health initiative, the National Science Foundation, or John Templeton Foundation."
] | [
"method",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"result",
"other",
"other",
"other"
] |
[
"Most previous methods for text data augmentation are limited to simple tasks and weak baselines.",
"We explore data augmentation on hard tasks (i.e., few-shot natural language understanding) and strong baselines (i.e., pretrained models with over one billion parameters).",
"Under this setting, we reproduced a large number of previous augmentation methods and found that these methods bring marginal gains at best and sometimes degrade the performance much.",
"To address this challenge, we propose a novel data augmentation method FlipDA that jointly uses a generative model and a classifier to generate label-flipped data.",
"Central to the idea of FlipDA is the discovery that generating label-flipped data is more crucial to the performance than generating label-preserved data.",
"Experiments show that FlipDA achieves a good tradeoff between effectiveness and robustnessit substantially improves many tasks while not negatively affecting the others.",
"1 1 Introduction Data augmentation is a method to augment the training set by generating new data from the given data.",
"For text data, basic operations including replacement, insertion, deletion, and shuffle have been adopted widely and integrated into a wide range of augmentation frameworks (Zhang et al., 2015; Wang and Yang, 2015; Xie et al., 2020a; Kobayashi, 2018; Wei and Zou, 2019).",
"Generative modeling methods such as back-translation have also been employed to generate augmented samples (Fadaee et al., 2017; Sennrich et al., 2016).",
"However, there are two major limitations.",
"First, some general augmentation methods are based on weak baselines without using large-scale pretrained language models.",
"Recent work showed that some of The authors have contributed equally to this work.",
"the data augmentation methods are less useful when combined with large pretrained models (Longpre et al., 2020).",
"Second, most prior studies are carried on simple tasks such as single-sentence classification where it is easier to generate legit augmented samples.",
"For harder tasks such as natural language inference (e.g., telling whether sentence A entails sentence B), it is not clear whether previous methods still help.",
"This work takes a step further to study data augmentation under strong baselines and hard tasks.",
"Our study employs large-scale pretrained language models such as DeBERTa (He et al., 2020c) with over one billion parameters as baselines.",
"Moreover, we target a very challenging settingfew-shot natural language understanding (NLU).",
"Following (Schick and Schutze, 2021), we consider challenging NLU tasks including question answering, textual entailment, coreference resolution, and word sense disambiguation, and use only 32 training examples for each task.",
"Under this setting, we reproduced several widely-used prior methods for data augmentation.",
"Our experiments lead to two unexpected discoveries: (1) most of prior augmentation methods bring only marginal gains at best and are not effective for most tasks; (2) in many cases, using data augmentation results in instability in performance and even entering a failure mode; i.e., performance may drop by a lot or fluctuate severely depending on which pretrained model is used.",
"The above issues prevent these augmentation methods from practical usage for few-shot learning.",
"We propose a novel method FlipDA that achieves both effectiveness and robustness for hard few-shot tasks.",
"Preliminary experiments showed that label-flipped data often largely improve the generalization of pretrained models, compared to augmented data that preserve the original labels.",
"Based on this observation, FlipDA first generates data using word substitution based on a pretrained T5 (Raffel et al., 2020) and uses a classifier to select label-flipped 8646 data.",
"Experiments demonstrate FlipDA substantially improves performance on many of the hard tasks, outperforming previous augmentation baselines in terms of average performance by a large margin.",
"Moreover, FlipDA is robust across different pretrained models and different tasks, avoiding failure modes.",
"Data Augmentation.",
"An important type of augmentation methods are based on word substitution , such as synonym replacement (Zhang et al., 2015), KNN replacement (Wang and Yang, 2015; Vija-yaraghavan et al., 2016), Unif replacement (Xie et al., 2020a), TF-IDF replacement (Xie et al., 2020a), Bi-RNN replacement (Kobayashi, 2018), and other entity replacement methods (Raiman and Miller, 2017; Miao et al., 2020; Yue and Zhou, 2020) etc.",
"EDA (Wei and Zou, 2019) combines four simple augmentation methods and back translation (BT) (Fadaee et al., 2017; Sennrich et al., 2016; Yu et al., 2018) is also widely used.",
"Unfortunately, EDA and BT are shown to be less useful with large pretrained models (Longpre et al., 2020).",
"Some augmentation methods are based on the perturbation in the feature space (Zhang et al., 2018a; Guo et al., 2020; Chen et al., 2020b,a; Miao et al., 2020; Kumar et al., 2019).",
"Generation (Xia et al., 2020; Li et al., 2019; Yoo et al., 2019; Ng et al., 2020; Liu et al., 2020; Hou et al., 2018) based methods are also proposed for better data diversity.",
"In addition, large pretrained models have been used for data augmentation.",
"(Kumar et al., 2020) utilize large pretrained models, such as GPT-2, BERT, and BART, for conditional data augmentation.",
"LAMBADA (Anaby-Tavor et al., 2020) finetunes a GPT-2 model with the priming technique to get augmented examples.",
"GPT3Mix (Yoo et al., 2021) uses GPT-3 along with prompting to generate augmented data for classification tasks.",
"Our method is similar to this line of work in that we also use pretrained models for generating augmented data.",
"However, there are the following key differences.",
"First, it is challenging for these prior methods to handle long sequences or multiple sentences.",
"In our preliminary experiments, we were not able to use these methods to generate proper data samples (see details in Section 4).",
"Second, besides generating augmented samples, we found it crucial to use label-flipped data for augmentation, which is a unique and critical aspect of FlipDA.",
"Self-training.",
"Self-training (III, 1965) iteratively augments training data by labeling unlabeled data with a trained model (Yarowsky, 1995; Riloff, 1996).",
"Knowledge distillation and pseudo-labeling are special forms of self-training (Hinton et al., 2015; Lee et al., 2013; Reed et al., 2015).",
"Strong data augmentation (Zoph et al., 2020), equal-or-larger model (Xie et al., 2020b), additional noise (Xie et al., 2020b; He et al., 2020a), and feedback of the student's performance (Pham et al., 2020) are helpful for self-training.",
"Self-training bears similarity to the second phase of FlipDA where a teacher model is used to filter samples.",
"Different from self-training, FlipDA leverages the advantages of label flipping to improve performance and does not rely on unlabeled data.",
"Label Flipping.",
"Our manual label flipping augmentation procedure is analogous to (Kaushik et al., 2020) and (Gardner et al., 2020).",
"Kaushik et al. (2020) aimed to mitigate the effects of learning spurious features.",
"Gardner et al. (2020) targeted reducing systematic gaps in the dataset.",
"In contrast, we target improving few-shot generalization.",
"Moreover, we measure the performance on an existing i.i.d. test set while Kaushik et al. (2020) and Gardner et al. (2020) created more challenging test sets.",
"Most importantly, we propose an automatic method of label flipping, going beyond manual efforts.",
"Contrastive Learning.",
"FlipDA is connected to contrastive learning (CL) (He et al., 2020b; Chen et al., 2020c) in that they both improve generalization by considering label differences.",
"CL uses data augmentation to generate positive instances and uses samples existing in the dataset as negative samples, while FlipDA shows that negative samples can be automatically generated.",
"While previous work on CL focuses on training with large datasets, our experiments show that augmenting a small dataset can improve few-shot generalization.",
"It could be intriguing to see whether such a connection might lead to advances in both fields, e.g., generating negative samples for large-scale contrastive pretraining.",
"Few-Shot NLU Tasks.",
"This work considers a collection of difficult NLU tasks from SuperGLUE (Wang et al., 2019) that require in-depth understanding of the input in order to obtain 8647 high performance, including coreference resolution (Levesque et al., 2011), causal reasoning (Gordon et al., 2012), textual entailment (de Marneffe et al., 2019; Dagan et al., 2005), word sense disambiguation (Pilehvar and Camacho-Collados, 2019), and question answering (Clark et al., 2019; Khashabi et al., 2018; Zhang et al., 2018b).",
"Following Schick and Schutze (2021), we used only 32 training examples to construct a few-shot setting to further increase the difficulty.",
"Large-Scale Pretrained Models.",
"Our setting assumes a large-scale pretrained language model (De-vlin et al., 2019; Lan et al., 2020; He et al., 2020c) is available and few-shot learning is performed based on the pretrained model.",
"This setting is crucial since previous studies found that using a strong pretrained model as the baseline eliminates the ben-efits of data augmentation (Longpre et al., 2020) while large pretrained models are becoming more and more available.",
"Our main result is based on DeBERTa (He et al., 2020c) with over one billion parameters.",
"We also provide results with ALBERT which has fewer parameters (Lan et al., 2020).",
"Our preliminary experiments with a large number of previous methods (in Section 4) lead to a conclusion that there is not an effective and robust method available for this hard setting.",
"We will discuss how we tackle this challenge by proposing a novel data augmentation method FlipDA in later sections.",
"We propose key desiderata for data augmentation methods under the setting of few-shot learning.",
"1. Effectiveness.",
"A data augmentation method should be able to improve performance on certain tasks in a significant manner.",
"2. Robustness.",
"A data augmentation method should not suffer from a failure mode in all cases.",
"Failure modes are common for few-shot learning where some minor changes might cause substantial performance drop.",
"We argue this should be used as a key evaluation metric.",
"We consider two types of robustness: (1) robustness w.r.t. different base pretrained models and (2) robustness w.r.t. various tasks.",
"Since previous methods are not sufficiently effective and robust in our preliminary experiments (see",
"Tables 5 and 6 in Section 4 for details), we use manual augmentation to investigate what kind of augmented data is beneficial for large pretrained models in the few-shot setting.",
"We mainly study two types of data augmentationone that preserves the labels and the other that flips the labels.",
"Since manual augmentation is time consuming, we select a subset of representative SuperGLUE tasks here.",
"To augment label-flipped data, the following principle is appliedmaking minimal changes to the original text sample to alter the label.",
"Augmentation includes word addition, deletion, and substitution.",
"To augment label-preserved data, we substitute some of the words with semantically similar words but make sure that the label is unchanged.",
"Results are shown in Table 1. 2 Flipping labels substantially improves performance on three of the tasks by up to 10 points, while preserving the labels only has minor gains.",
"In contrast, many prior methods on data augmentation focus on creating data examples that are assumed to have the same labels as the original ones.",
"This might explain why previous augmentation methods are not sufficiently effective for the few-shot setting.",
"Some of the label-flipped augmented examples are shown in Table 2. We conjecture that label flipping augmentation provides useful information about the important components in a sentence that determine the label.",
"In other words, augmented samples provide intermediate supervision that explains the predictions, improving generalization in a few-shot setting.",
"There is a caveat about this manual augmentation experiment.",
"Although we follow certain principles and pay much attention to the augmentation quality, the manual augmentation procedure is inevitably 2 For each original example, we produce one augmented example for each type.",
"The augmented data and the original data are combined for training.",
"Following Schick and Schutze (2021), we train each pattern with three seeds and ensemble these (pattern, seed) pairs.",
"We repeat this ensemble process 3 times and report their mean and standard deviation.",
"subjective and hard to reproduce.",
"For reference, we will make our manually augmented dataset publicly available.",
"More importantly, we will design an automatic method (FlipDA) in the following sections for objective evaluation and reproducibility.",
"We also analyze why augmentation methods usually suffer from failure modes.",
"Most augmentation methods are based on a label preserving assumption, while it is challenging for automatic methods to always generate label-preserved samples.",
"We first examine the samples generated by prior automatic methods EDA (Wei and Zou, 2019) and KNN (Wang and Yang, 2015) in Table 4. In the first example, the keyword rabies is deleted, which not only results in a grammatically incorrect expression but also eliminates the key information to support the hypothesis.",
"In the second example, the Lake Titicaca is replaced by Lake Havasu, which results in a label change from entailment to non-entailment.",
"If a model is trained on these noisy augmented data with the label preserving assumption, performance degradation is expected.",
"We further experimented with EDA (Wei and Zou, 2019) on the RTE task (Dagan et al., 2005) to verify the cause of failure modes.",
"Using EDA decreases the performance by a few percentage points with both ALBERT and DeBERTa, entering a failure mode.",
"We identified two types of noise in the augmented samples: (1) grammatical errors that lead to the difficulty of understanding and (2) mod-ification of key information that alters the labels.",
"We experimented with (1) replacing these noisy samples with the original ones and (2) correcting the labels of the noisy samples.",
"3 As Table 3 shows, 3 For label correction, if a sample has severe grammatical mistakes and is not understandable by human, we always mark it as not entailment.",
"This is related to an interesting phenomenon that label flipping is usually asymmetric for NLU tasks.",
"We will discuss more of the phenomenon in Section 4.5.",
"both replacing and correcting noisy samples largely improve performance to prevent the failure mode.",
"Moreover, correcting the labels brings large gains, indicating label flipping tends to alleviate the issue.",
"To reiterate, these experiments involve subjective factors and are merely meant to show the intuition of FlipDA, rather than proving its superiority.",
"Observations in Sections 3.3 and 3.4 show that label-flipping could benefit few-shot NLU in both effectiveness and robustness.",
"Reducing grammatical errors is also key to preventing failure modes.",
"This motivates our development of FlipDA that automatically generates and selects label-flipped data without label-preserving assumption.",
"FlipDA consists of 4 steps as shown in Figure 1: 1. Train a classifier (e.g., finetuning a pretrained model) without data augmentation.",
"2. Generate label-preserved and label-flipped augmented samples.",
"3. Use the classifier to select generated samples with largest probabilities for each label.",
"4. Retrain the classifier with the original samples and the additional augmented samples.",
"Formally, given a few-shot training set { ( x i , y i ) } i where x i is text (possibly a set of text pieces or a single piece) and y i Y is a label.",
"We finetune a pretrained model f to fit the conditional probability for classification f ( x, y ) = p ( y | x ) .",
"In the second step, we generate augmented samples 8649 Table 4: Augmented example with wrong labels.",
"from the original ones.",
"For each training sample x i , we generate a set of augmented samples S i = { x i, 1 , x i, 2 , } .",
"In our implementation, we first use a cloze pattern (Schick and Schutze, 2021) to combine both x and y into a single sequence, and then randomly mask a fixed percentage of the input tokens.",
"This is followed by employing a pretrained T5 model (Raffel et al., 2020) to fill the blanks to form a new sample x (cid:48) (see Appendix A.3 for more details).",
"We find it beneficial to remove the sample if T5 does not predict y given x (cid:48) .",
"Note that using T5 to generate augmented samples does introduce additional knowledge and reduce grammatical errors, but naively using T5 for augmentation without label flipping and selection does not work well (see ablation study in Section 4).",
"After generating the augmented samples, we use the classifier f for scoring.",
"Specifically, let S i be a set of augmented samples generated from the original sample ( x i , y i ) .",
"which contains all augmented samples with y (cid:48) being highest-probability class.",
"Given the set S i,y (cid:48) , we select the sample with the highest predicted probability x (cid:48) , y (cid:48) = arg max x S i,y (cid:48) ,y = y (cid:48) p ( y | x ) where x (cid:48) is a sample in the generated set, y (cid:48) is the flipped label, and the estimated probability p ( y (cid:48) | x (cid:48) ) scored by the model f is the largest in S i,y (cid:48) .",
"After selecting the label-flipped example ( x (cid:48) , y (cid:48) ) , we add ( x (cid:48) , y (cid:48) ) to the augmented training set.",
"In other words, we only add an example into the training set if the model f considers the flipped label to be correct.",
"We apply this procedure to each possible label y (cid:48) (cid:54) = y i .",
"In case S i,y (cid:48) is empty, we do not add any examples to the training set.",
"In practice, we 8650 find it beneficial to also add the example with the highest probability of label preserving, using the same procedure.",
"After augmenting the training set, we retrain the classifier f to obtain the final model.",
"Baselines.",
"We take seven augmentation methods as the baseline, including Synonym Replacement (SR) (Zhang et al., 2015), KNN Replacement (KNN) (Wang and Yang, 2015), Easy Data Augmentation (EDA) (Wei and Zou, 2019), Back Translation (BT) (Fadaee et al., 2017), TinyBERT (T-BERT) (Jiao et al., 2019), T5-MLM, and MixUP (Zhang et al., 2018a).",
"For more details about baseline selection and implementation, please refer to Appendix A.2.",
"Evaluation Protocol We evaluate augmentation methods based on PET (Schick and Schutze, 2021).",
"Following PET, we take a set of pre-fixed hyper-parameters (see Appendix A.1).",
"Considering few-shot learning is sensitive to different patterns and random seeds (Dodge et al., 2020; Schick and Schutze, 2021), we reported the average performance over multiple patterns and 3 iterations.",
"We evaluate FlipDA on 8 tasks with 2 pretrained models.",
"For effectiveness, we use exactly the same metrics (i.e., accuracy, F1, and EM) as PET (Schick and Schutze, 2021).",
"For robustness, we propose a new metric MaxDrop (MD), which measures the maximum performance drop compared to not using augmentation over multiple tasks for a given method.",
"Given tasks t 1 ,..., t n , a target method M , and a baseline method MB , MD is defined as MD= max t { t 1 ,...,t n } max(0 , score t,M B score t,M ) , where score t,M ( score t,M B ) denotes the performance of method M ( MB ) on task t .",
"Smaller values indicate better robustness w.r.t tasks.",
"Results are presented in Table 5 and Table 6.",
"We observe that FlipDA achieves the best performance among all data augmentation methods in both effectiveness (Avg.) and robustness (MD) on both ALBERT-xxlarge-v2 and DeBERTa-v2-xxlarge.",
"Specifically, FlipDA achieves an average performance of 74.63 on ALBERT-xxlarge-v2 and an average of 80.23 on DeBERTa-v2-xxlarge, both of which outperform baselines by around 3 points.",
"It suggests FlipDA is effective in boosting the performance of few-shot tasks by augmenting high-quality data without causing too many side effects.",
"FlipDA shows improvements on all tasks except WSC, while all the other methods only work on a few tasks (denoted with underlines).",
"Such observations are consistent with the MaxDrop results, where FlipDA achieves the lowest MaxDrop value of 0.0 on ALBERT-xxlarge-v2 and 1.28 on DeBERTa-v2-xxlarge.",
"This implies FlipDA is robust to different types of tasks, while other augmentation methods could only be effective for partial tasks and not sufficiently robust.",
"Effectiveness of Pattern-based Data Cloze To study different methods of obtaining candidate augmented data, we feed candidates obtained by different methods into the same classifier (as FlipDA uses).",
"Table 6 shows the ablation results.",
"FlipDA outperforms all the other baseline methods with a classifier (i.e., with FlipDA cls).",
"Other methods of obtaining augmented data candidates cannot reach similar performance as FlipDA when combining with FlipDA classifier, which proves the effectiveness of our pattern-based data cloze strategy with T5.",
"Reasons could be that T5-based augmentation produces samples with fewer grammatical errors.",
"(will further discuss in Sec 4.7).",
"Moreover, T5-style blank filling could produce samples that are more compatible with label flipping.",
"Effectiveness of FlipDA Classifier We then compare the performance of different methods with and without the FlipDA classifier.",
"According to Table 6, most baseline methods with the FlipDA classifier outperform the original version in terms of both effectiveness (Avg.) and robustness (MD).",
"This demonstrates that the FlipDA classifier which is capable of flipping labels and filtering data is effective in augmenting high-quality data and improving few-shot NLU performance.",
"The only exception is BT-6.",
"The reason could be data augmented by back translation usually lack diversity and are less likely to change labels, and using the FlipDA classifier further decreases diversity and hurts its performance.",
"The improvement brought by the FlipDA classifier is more consistent on BoolQ, RTE, and MultiRC.",
"This may be because these tasks involve predicting a single token with two opposite choices, and thus label flipping might happen more often.",
"Some of the other tasks such as COPA and WSC in-8651 Table 5: Performance of baseline methods and FlipDA based on PET and ALBERT-xxlarge-v2 (baseline denotes the original PET with no data augmentation. Underline denotes values that outperform baseline. Bold denotes the best-performed ones of the task).",
"volve predicting multiple tokens, which makes generating label-flipped data more difficult.",
"This leads to less substantial improvement on these tasks.",
"A follow-up question is how label-flipped data and label-preserved data respectively contribute to the overall improvements.",
"We run decoupling label-flipped data and label-preserved data.",
"Results are in Table 7, where bold text represents the best-performed methods.",
"We conclude that augmenting both label-flipped and label-preserved data leads to the best average performance.",
"Besides, values with underlines denote the second-best performance, most of which are augmenting only label-flipped data.",
"Augmenting only label-preserved data leads to the worst performance, even slightly underperforming the non-augmentation baseline.",
"This demonstrates the high effectiveness of label-flipping.",
"This aligns well with our analysis in Section 3.3.",
"More results on ALBERT are in A.7.2.",
"Section 4.4 proves that label-flipped augmented data are more effective in improving few-shot performance than label-preserved ones.",
"It is even more intriguing to study which direction of label flipping is able to benefit the few-shot performance to the maximum extent.",
"We experiment with 4 binary 8652 Table 7: Ablation study on label-flipped data",
"classification tasks, i.e., RTE, BoolQ, WiC, and MultiRC.",
"Each task has 4 directions of label transformation.",
"We conduct experiments that augment data in each of the four directions respectively and compare their effectiveness.",
"Results on DeBERTa are shown in Table 8, and results on ALBERT are in Appendix A.7.3.",
"We can see that some tasks are asymmetric, i.e., transforming in one direction is more beneficial than the other, such as BoolQ, RTE, and WiC.",
"We conjecture that it is because it is relatively easy for a model to generate samples with answers in some direction (from yes to no in BoolQ, from 'en-tailment' to not entailment in RTE, and so on).",
"While some tasks are symmetric, i.e., the difference between the two directions is not significant, such as MultiRC.",
"On all tasks, even though some direction is better than others, augmenting with only one direction will affect the label distribution.",
"This will likely lead to a lower performance than the baseline.",
"Augmenting with all directions is still necessary for the best performance.",
"We propose four plausible strategies for augmented data selection, and quantitatively evaluate them.",
"The four strategies are described as follows.",
"2. Global Top K .",
"For each label transformation direction, all the candidate augmented data are gathered and sorted by their predicted probabilities, and the topK ( or topr % ) samples with the highest probabilities are selected.",
"3. Global Top P .",
"Similar to Global Top K , but augmented data with predicted probabilities higher than a threshold P are selected.",
"4. Diverse Top K .",
"Similar to Global Top K except that a mechanism is used to balance between the original samples.",
"Concretely, we first select the top-1 augmented samples of each original sample (ranked by decreasing probabilities), and then select the top-2, top-3, etc, until K samples have been selected.",
"Since FlipDA can be viewed as a self-training algorithm, we also add a self-training algorithm Noisy Student (Xie et al., 2020b) as another baseline.",
"We treat the augmented data as unlabeled data and add noises with a dropout rate of 0.1.",
"Table 9 shows the results of different strategies on different tasks.",
"More results on ALBERT are in A.7.4.",
"For Global Top P , we set the threshold P at 0.9 or 0.95, whichever is better.",
"For Global Top K and Diverse Top K , we select the top 10% or 20% augmented examples, whichever is better.",
"Our strategies outperform Noisy Student.",
"Among our four data selection strategies, the Default strategy and Diverse Top K perform the best.",
"Both methods emphasize diversity by using augmented data from different samples.",
"This demonstrates the importance of data diversity and balance for augmented data selection.",
"We show four augmented cases on the RTE task by FlipDA in Table 10.",
"Please refer to Appendix A.8 for more augmented examples.",
"In the first case, we can see that the T5-model changes the name of the tropical storm from Debby to Maria, and it also changes the trop-ical storm to its hypernym hurricane, and all these changes contribute to a different expression 8653 Table 9: Results of different strategies for choosing augmented data on DeBERTa (xxlarge).",
"without affecting its label.",
"The second case adds not to the premise and therefore the label flips.",
"The third case changes dwindles to its antonym increased, and then the label changes from Not Entailment to Entailment.",
"The last case changes the future tense to the simple past tense, April to March, and May to April correspondingly, without affecting its label.",
"We can see that the way to change or keep the label is rich and natural.",
"Moreover, the generation quality is improved compared to cases generated by EDA in Table 4, which also addresses the concerns of generation quality raised in Section 3.4.",
"We propose to study few-shot NLU based on large-scale pretrained models.",
"Two key desiderata, i.e., effectiveness and robustness, are identified.",
"Based on the empirical insight that label flipping improves few-shot generalization, we propose FlipDA with automatic label flipping and data selection.",
"Experiments demonstrate the superiority of FlipDA, outperforming previous methods in terms of both effectiveness and robustness.",
"In the future, it will be crucial to theoretically understand why and how generating label-flipped data in the neighborhood of existing data points improves generalization.",
"Moreover, increasing the diversity and quality of augmented data generation is also an important long-term goal.",
"Zhou and Li are supported in part by the National Natural Science Foundation of China Grant 62161146004, Turing AI Institute of Nanjing and Xi'an Institute for Interdisciplinary Information Core Technology.",
"Tang is funded by NSFC for Distinguished Young Scholar (61825602).",
"Zheng is Funded by China Postdoctoral Science Foundation (2021M690471)."
] | [
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"method",
"objective",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Ontologies compartmentalize types and relations in a target domain and provide the semantic backbone needed for a plethora of practical applications.",
"Very often different ontologies are developed independently for the same domain.",
"Such parallel ontologies raise the need for a process that will establish alignments between their entities in order to unify and extend the existing knowledge.",
"In this work, we present a novel entity alignment method which we dub DeepAlignment .",
"DeepAlignment refines pre-trained word vectors aiming at deriving ontological entity descriptions which are tailored to the ontology matching task.",
"The absence of explicit information relevant to the ontology matching task during the refinement process makes DeepAlignment completely unsupervised.",
"We empirically evaluate our method using standard ontology matching benchmarks.",
"We present significant performance improvements over the current state-of-the-art, demonstrating the advantages that representation learning techniques bring to ontology matching.",
"Translation across heterogeneous conceptual systems is an important challenge for cognitive science (Goldstone and Rogosky, 2002; Stolk et al., 2016).",
"Ontology Matching constitutes the task of establishing correspondences between semantically related entities (i.e. classes and properties) from different ontologies, as illustrated in Figure",
"1. Similarly, ontology matching is crucial for accomplishing a mutual understanding across heterogeneous artificial cognitive agents (Taylor, 2015).",
"However, despite the many proposed solutions, it is widely accepted that there is no solution robust enough to deal with the high ontological linguistic variability (Shvaiko and Euzenat, 2008, Entity Conference Dinner Television Episode Entity MealMenu Banquet TV Episode Figure 1: Example of alignments (black lines) and misalignments (red crossed lines) between ontologies.",
"2013); hampering, thus, the discovery of shared meanings.",
"Research in automatic ontology matching has focused on engineering features from terminological, structural, extensional (ontology instances) and semantic model information extracted from the ontological model.",
"These features are then used to compute ontological entity similarities that will guide the ontology matching.",
"Deriving such features for a given problem is an extremely time consuming task.",
"To make matters worse, these features do not transfer in other domains.",
"As Cheatham and Hitzler (2013) have recently shown, the performance of ontology matching based on different textual features varies greatly with the type of ontologies under consideration.",
"At the same time, machine learning research is characterised by a shift from feature engineering based approaches to feature and representation learning as a result of the performance improvements brought by deep learning methods.",
"A by now classical example is the unsupervised learning of semantic word representations based on the distributional hypothesis (Harris, 1954), i.e. the assumption that semantically similar or related words appear in similar contexts (Deerwester et al., 1990; Bengio et al., 2003; Mikolov et al., 2013a,c; Pennington et al., 2014).",
"Word vectors have the potential to bring significant value to on-787 tology matching given the fact that a great deal of ontological information comes in textual form.",
"One drawback of these semantic word embeddings is that they tend to coalesce the notions of semantic similarity and conceptual association (Hill et al., 2016b).",
"For instance, the word harness is highly related to the word horse, as they share strong associations, i.e. a harness is often used on horses (Lofi, 2016).",
"From an ontological point of view, however, these types should not be similar.",
"Moreover, as unsupervised learning requires even larger text corpora, the learned vectors tend to bring closer words with similar frequency instead of similar meaning (Faruqui et al., 2016).",
"Clearly, word representations that reflect frequency instead of meaning is an undesired feature if we seek to exploit word vectors for ontology matching; alignment based on such representations will reflect similar frequency instead of similar meaning.",
"A number of lightweight vector space representation refining techniques were introduced recently in an effort to correct these biases (Faruqui et al., 2015; Mrksic et al., 2016).",
"They use synonymy and antonymy constraints extracted from semantic lexicons to refine the learned word representations and make them better suited for semantic similarity tasks.",
"Such methods are a way to inject domain-specific knowledge to tailor the learned word representations to a given task.",
"As a result, we can exploit the synonymy/antonymy constraints to learn semantic word representations that are better candidates for ontology matching.",
"In this paper we learn representations of ontological entities instead of feature engineering them.",
"We use the learned representations to compute the entities' semantic distances and to subsequently perform the ontology matching task.",
"In order to represent the ontological entities, we exploit the textual information that accompanies them.",
"We represent words by learning their representations using synonymy and antonymy constraints extracted from general lexical resources and information captured implicitly in ontologies.",
"We cast the problem of ontology matching as an instance of the Stable Marriage problem (Gale and Shapley, 1962) using the entities semantic distances.",
"Our approach has a number of advantages.",
"The word embeddings we establish are tailored to the domains and ontologies we want to match.",
"The method relies on a generic unsupervised representation learning solution which is important given the small size of training sets in ontology matching problems.",
"We evaluate our approach on the Conference dataset provided by the Ontology Alignment Evaluation Initiative (OAEI) campaign and on a real world alignment scenario between the Schema.org and the DBpedia Ontologies.",
"We compare our method to state-of-the-art ontology matching systems and show significant performance gains on both benchmarks.",
"Our approach demonstrates the advantages that representation learning can bring to the task of ontology matching and shows a novel way to study the problem in the setting of recent advances in NLP.",
"The vast majority of ontology matching research follows the feature engineering approach (Wang and Xu, 2008; Cruz et al., 2009; Khadir et al., 2011; Jimenez-Ruiz and Grau, 2011; Fahad et al., 2012; Ngo and Bellahsene, 2012; Gulic et al., 2016).",
"Features are generated using a broad range of techniques (Anam et al., 2015; Harispe et al., 2015), ranging from the exploitation of terminological information, including structural similarities and logical constraints, such as datatype properties, cardinality constraints, etc.",
"Ontology matching is done by acting on the aforementioned features in different ways.",
"Heuristic methods that rely on aggregation functions, such as max, min, average, weighted sum , etc., to fuse the information found in these features are quite popular (Anam et al., 2015).",
"Other approaches use first order logic and cast ontology matching as a satisfiability problem (Giunchiglia et al., 2004; Jimenez-Ruiz and Grau, 2011).",
"Several works exploit supervised machine learning for Ontology Matching.",
"Mao et al. (2011) cast ontology mapping as a binary classification problem.",
"They generate various domain indepen-dent features to describe the characteristics of the entities and train an SVM classifier on a set which provides positive and negative examples of entity alignments.",
"In general, the number of real alignments is orders of magnitude smaller than the number of possible alignments which introduces a serious class imbalance problem (Mao et al., 2008) hindering learning.",
"Since we only use supervision to refine the word vector representations we avoid altogether the class imbalance problem.",
"Deep learning has so far limited impact on ontology matching.",
"To the best of our knowledge, only two approaches, (Zhang et al., 2014; Xiang et al., 2015), have explored the use of unsupervised deep learning techniques.",
"Zhang et al. (2014) are considered to be the first ones that use word vectors in ontology matching.",
"They train word2vec (Mikolov et al., 2013a) vectors on Wikipedia.",
"They use the semantic transformations to complement the lexical information, i.e. names, labels and comments, describing entities.",
"Their entity matching strategy is based on maximum similarity; for every entity e in the source ontology O , the algorithm finds the most similar entity e 0 in the target ontology O 0 .",
"Their experiments on the OAEI benchmarks show that their techniques, even when combined with classical NLP techniques, could not outperform the state-of-the-art.",
"In contrast, we refine pre-trained word embeddings with the intention of leveraging a new word vector set that is tailored to the ontology matching task.",
"Xiang et al. (2015) propose an entity representation learning algorithm based on Stacked Auto-Encoders (Bengio et al., 2007).",
"To describe an entity they use a combination of its class ID, labels, comments, properties descriptions and its in-stances' descriptions.",
"The entities' similarity is computed with a fixed point algorithm.",
"They perform the entity matching using the Stable Marriage algorithm.",
"Training such powerful models with so small training sets is problematic.",
"We overcome this by using a transfer learning approach, known to reduce learning sample complexity (Pentina and Ben-David, 2015), to adapt pre-trained word vectors to a given ontological domain.",
"We present an ontology matching approach that uses information from ontologies and additional knowledge sources to extract syn-onymy/antonymy relations which we use to refine pre-trained word vectors so that they are better suited for the ontology matching task.",
"We represent each ontological entity as the bag of words of its textual description, which we complement with the refined word embeddings.",
"We match the entities of two different ontologies using the Stable Marriage algorithm over the entities' pairwise distances.",
"We compute the aforementioned distances using a variant of a document similarity metric.",
"Before we proceed with the presentation of the method, we will provide a formal definition of what an entity correspondence is.",
"Given two ontologies O and O 0 , we define the correspondence between two entities e O and e 0 O 0 as the five-element tuple: cor e,e 0 = < id, e, e 0 , r, n > (1) where r is a matching relation between e and e 0 (e.g., equivalence, subsumption) and n [0 , 1] is the degree of confidence of the matching relation between e and e 0 (Euzenat and Shvaiko, 2013).",
"The id holds the unique identifier of the mapping.",
"Unlike the majority of ontology alignment systems which discover one-to-one equivalence mappings (Anam et al., 2015), we focus on discovering many-to-many mappings.",
"We will also introduce some additional notation used in the paper.",
"Let u 1 , u 2 R d be two d -dimensional vectors, we compute their cosine distance as follows: d ( u 1 , u 2 ) = 1 cos ( u 1 , u 2 ) .",
"For x R , we define the rectifier activation function as: ( x ) = max( x, 0) .",
"The counter-fitting method (Mrksic et al., 2016) uses synonymy and antonymy relations extracted from semantic lexicons to refine and adapt pretrained word embeddings for given semantic similarity tasks.",
"We broaden the concept of antonymy relations and allow for a larger class of ontology relations to define antonymies.",
"This allows us to inject domain knowledge encoded in ontologies and produce more appropriate word vectors for the ontology matching task.",
"In the rest of the section we revise the main elements of the counterfitting method and describe how we can exploit it for learning domain specific word embeddings.",
"Let V = { v 1 , v 2 , . . . v N } be an indexed set of word vectors of size N .",
"The counter-fitting method transforms a pretrained vector set V into a new one V 0 = { v 0 1 , v 0 2 , . . . v 0 N } , based on a set of synonymy and antonymy constraints S and A, respectively.",
"This is done by solving the following non-convex optimization problem: min V 0 1 AR ( V 0 ) + 2 SA ( V 0 ) + 3 V SP ( V, V 0 ) 789 The AR ( V 0 ) function defined as: AR ( V 0 ) = X ( u,w ) A (1 d ( v 0 u , v 0 w )) is called antonym repel and pushes the refined word vectors of antonymous words to be away from each other.",
"As we already mentioned, we extend the notion of antonymy relations with respect to its more narrow traditional linguistic definition.",
"We consider that two entities in a given ontology are antonymous if they have not been explicitly stated as equivalent, in the sense of a logical assertion or a synonymy relation found in a semantic lexicon.",
"The SA ( V 0 ) function defined as: SA ( V 0 ) = X ( u,w ) S d ( v 0 u , v 0 w ) is called synonym attract and brings closer the transformed word vectors of synonyms.",
"In order to extract synonymy information we search for paraphrases in semantic lexicons.",
"Concretely, let 1 = { word 11 , word 12 , . . . , word 1 m } , 2 = { word 21 , word 22 , . . . , word 2 n } be the textual information of two entities from different ontologies.",
"If the combination { word 1 i , word 2 j } or { word 2 j , word 1 i } for some i { 1 , . . . , m } and j { 1 , . . . , n } appears as a paraphrase in any semantic lexicon then we add the synonymy information ( u, w ) in the set S of synonymy constraints.",
"forces the refined vector space to reflect the original word-vector distances.",
"N ( i ) is the set of words that lie within distance from the i -th word vector in the original vector-space.",
"The experiments show that the value of does not affect signifi-cantly the performance of the whole algorithm, so for computational efficiency we fix it to = 0 .",
"05 .",
"We minimize the objective function with stochastic gradient descent (SGD).",
"We use as a convergence criterion the norm of the gradient.",
"We continue updating the model until this is smaller than 10 5 .",
"In our experiments we typically observe convergence with less than 25 iterations.",
"As before, let V 0 be the refined word vectors and 1 = { word 11 , word 12 , . . . , word 1 m } , 2 = { word 21 , word 22 , . . . , word 2 n } be the textual information that describes two entities from different ontologies.",
"The textual information of an entity can be extracted from different sources, such as the entity's name, label, comments, etc.",
"We replace the appearance of a word with its refined word vector.",
"Hence, we end up with two sets of word vectors Q and S , respectively.",
"In order to do the matching of the entities of two ontologies we use a semantic distance over the entities' representations, here the set of word vectors associated with each entity.",
"There have been many ways to compute the semantic similarity of two word sets, such as the Word Moving Distance (Kusner et al., 2015) and the Dual Embedding Space Model (DESM) (Nal-isnick et al., 2016).",
"We will base our semantic distance on a slight variation of the DESM similarity metric.",
"Our metric computes the distance of two sets of word vectors Q and S as follows: ( Q, S ) = 1 | Q | X q i Q d ( q i , S ) (2) where S = 1 | S | P s j S s j k s j k is the normalised average of the word embeddings that constitute the set of words S .",
"Hence, one of the word vectors' sets is represented by the centroid of its normalized vectors.",
"The overall set-to-set distance is the normalized average of the cosine distance d between the computed centroid and the other's set word vectors.",
"A first observation is that the introduced distance is not symmetric.",
"Ideally, we would expect the semantic distance of two word sets to be irrelevant of the order of the inputs.",
"To make it symmetric, we redefine the distance between two sets of word vectors as: dis ( 1 , 2 ) = max( ( Q, S ) , ( S, Q )) (3) It is important to note that dis ( 1 , 2 ) is not a proper distance metric as it does not satisfy the triangle inequality property.",
"Despite this fact, it has proved to work extremely well on all the ontology matching scenarios.",
"Similar to the work in (Xiang et al., 2015) we use the extension of the Stable Marriage Assignment",
"problem to unequal sets (Gale and Shapley, 1962; McVitie and Wilson, 1970).",
"The stable marriage algorithm computes one-to-one mappings based on a preference m n matrix, where m and n is the number of entities in ontologies O and O 0 , respectively.",
"Note that the violation of the triangle inequality by our semantic distance (equation 3) is not an impediment to the Stable Marriage algorithm (Gale and Shapley, 1962).",
"The majority of the ontology matching systems produce equivalence mappings with cardinality one-to-one.",
"Hence, one entity e in ontology O can be mapped to at most one entity in e 0 in O 0 and vice versa.",
"According to a recent review (Anam et al., 2015) only two out of almost twenty ontology matching systems provide solutions to detect many-to-many mappings.",
"However, ontology designers focus on different degrees of granularity, so it is expected that one entity from some ontology can correspond to more than one entities in another ontology and vice-verca.",
"To address this problem, we present an algorithm that extends the one-to-one mappings of the previous step to many-to-many.",
"The basic idea is that some alignments that were omitted by the Stable Marriage solution were very close to the optimal alignment and they should also be included in the final alignment set.",
"However, despite the use of refined word vectors, we cannot completely avoid the problems that come from the semantic similarity and conceptual association coalescence.",
"The solution of this problem comes from the observation that we can add the constraint that the mapping should be extended only in the case that the new entity that will be added will share a subsumption relation with the existing one.",
"Below we give a more formal definition of what we will call an (cid:15) -optimal mapping between two entities e and e 0 that belong to two different ontologies O and O 0 respectively.",
"Definition 1 Let e e 0 be the optimal mapping produced by the Stable Marriage Solution from the entity e O to the entity e 0 O 0 , where O and O 0 are two different ontologies.",
"Let e e 00 be another mapping, where e 00 O 0 .",
"Given an (cid:15) > 0 , we call the mapping e e 00 (cid:15) -optimal with respect to the mapping e e 0 if and only if the following two hold: | dis ( 1 , 2 ) dis ( 1 , 3 ) | < (cid:15) , where 1 , 2 , 3 is the textual information of entities e , e 0 and e 00 , respectively.",
"e 0 and e 00 should be logically related with a subsumption relation.",
"Equivalently, there must be either a logical assertion that e 0 is subclass of e 00 or e 00 is subclass of e 0 .",
"The subsumption restriction requires that the extended alignments share a taxonomic relation, in order to avoid matchings between entities that are conceptually associated.",
"We iteratively search for (cid:15) -optimal mappings according to the algorithm 1 to extend the established one-to-one mappings to many-to-many.",
"For efficiency reasons, we do not check all the entities, but only the r closest entities according to the dis ( 1 , 2 ) distance.",
"As a final step, we iteratively pass through all the produced alignments and we discard those with dis ( 1 , 2 ) greater than a hyperparameter value thres .",
"In this section, we present the experiments we performed on the OAEI conference dataset and in one real word alignment scenario between the Schema.org and DBpedia ontologies.",
"One of the main problems that we have encountered with the comparative evaluation of our algorithm is that even though numerous ontology matching algorithms exist, for only a very small portion of them either the respective software or the system's out-791 put is publicly available.",
"To the best of our knowledge, among all the systems tested in the conference dataset only AML (Cruz et al., 2009) and LogMap (Jimenez-Ruiz and Grau, 2011) are publicly available.",
"As it happens these are two of the state-of-the-art systems.",
"Moreover, AML offers solutions to detect many-to-many alignments (Faria et al., 2015) and, thus, constitutes a competitive baseline against which we will compare the performance of extendMap which also provides many-to-many alignments.",
"When training to refine the vector representations an unbalanced proportion of synonymy and antonymy constraints sets can cause problems; the set with the lower cardinality will have limited impact on the final word representations.",
"To overcome this problem, we run an additional step of the counter-fitting procedure, using only a small random subset of the supernumerary constraints and all constraints of the minority set.",
"We randomly undersample the larger set and reduce its cardinality to that of the smaller set.",
"We call this additional step the recounter-fitting process.",
"To demonstrate the importance of the recounter-fitting process and test the behavior of the pretrained word vectors in the absence of synonymy and/or antonymy relations, we have conducted additional experiments which we also present.",
"In all of our experiments we have applied the counter-fitting process upon the Paragram-SL999 word vectors provided by Wieting et al. (2015).",
"With respect to the textual information extracted for each entity, we have only used the entity's ID (rdf:ID).",
"To estimate the precision, recall and F 1 measure of all the systems, that we consider for testing, and check for the statistical significance of the results we use an approximate randomization test with 1048576 shuffles, as described in Yeh (2000).",
"Let 1 = { word 11 , word 12 , . . . , word 1 m } , 2 = { word 21 , word 22 , . . . , word 2 n } be the textual information that accompanies two entities from different ontologies.",
"We extracted the synonymy and antonymy constraints that we used in the experiments from the following semantic lexicons: WordNet: a well known lexical database for the English language (Miller, 1995).",
"In our experiments we did not use WordNet synonyms.",
"Instead, we have included WordNet antonymy pairs together with the antonymy relations extracted by the ontologies.",
"The strategy that we have followed in order to create the WordNet's antonymy pairs is that every two words with antonymous word senses, we have considered them as antonyms.",
"PPDB 2.0: the latest release of the Paraphrase Database (Pavlick et al., 2015).",
"We have used this database in two different ways.",
"We have used the largest available single-token terms (XXXL version) in the database and we have extracted the Equivalence relations as synonyms, and the Exclusion relations as antonyms.",
"Additionally, we have searched the whole XXXL version of PPDB for paraphrases based on the words appeared in two entities from different ontologies.",
"Namely, our strategy was the following: If the pair ( word 1 i , word 2 j ) or the pair ( word 2 j , word 1 i ) appeared on the PPDB and their type of relation was not Exclusion , we considered it as synonym.",
"WikiSynonyms: a semantic lexicon which is built by exploiting the Wikipedia redirects to discover terms that are mostly synonymous (Dakka and Ipeirotis, 2008).",
"In our experiments we have used it only on the Schema.org 1 DBpedia 2 scenario.",
"Our strategy was the following: we search if there exist synonyms in the WikiSynonyms for the 1 and 2 .",
"If this is the case, we extract them and we stop there.",
"In the opposite case we extract the synonyms for each word 1 i and word 2 j .",
"We tuned the hyperparameters on a set of 100 alignments which we generated by randomly sampling the synonyms and antonyms extracted from WordNet and PPDB.",
"We chose the vocabulary of the 100 alignments so that it is disjoint to the vocabulary that we used in the alignment experiments, described in the evaluation benchmarks, in order to avoid any information leakage from training to testing.",
"We tuned to maximize the F 1 measure.",
"In particular, we did a coarse grid search over a parameter space for 1 , 2 , 3 , r , (cid:15) and thres .",
"We considered 1 , 2 [0 . 35 , 0 . 45] and 3 [0 . 1 , 0 . 2] with common step 0 .",
"01 , r [1 , 10] with step 1 , (cid:15) [0 . 01 , 0 . 1] with step 0 .",
"01 and thres [0 . 3 , 0 . 7] with step 0 .",
"05 .",
"We trained for 25 epochs for each hyperparameter using SGD.",
"The best values were the following: 1 = 0 .",
"4 , 2 = 0 .",
"4 , 3 = 0 .",
"1 , r = 8 , (cid:15) = 0 .",
"07 and thres = 0 .",
"5 .",
"We used the selected configuration on all the alignment scenarios described below.",
"One of our evaluation benchmarks comes from the Ontology Alignment Evaluation Initiative (OAEI), which organizes annual campaigns for evaluating ontology matching systems.",
"The external to OAEI evaluation benchmark comes from the provided alignments between the Schema.org and the DBpedia ontologies.",
"We provide some further details for each dataset below: OAEI Conference Dataset: It contains 7 ontologies addressing the same domain, namely the conference organization.",
"These ontologies are suitable for ontology matching task because of their heterogeneous character of origin.",
"The overall performance (micro-precision, micro-recall, micro-F1) of the systems is tested upon 21 different test cases.",
"Specifically, we summed up the individual true positives, false positives and false negatives based on the system results for the different ontology matching tasks and, in the next step, we computed the performance metrics.",
"The original reference alignment is not closed under the alignment relation, so the transitive closure should be computed before proceeding on the evaluation of the systems.",
"Schema.org DBpedia Alignment: It corresponds to the incomplete mapping of the Schema.org and DBpedia ontologies.",
"Schema.org is a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond.",
"On the other hand, DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web.",
"This alignment corresponds to a real case scenario between two of the most widely used ontologies in the web today.",
"All the systems presented in the Conference dataset experiments (Table 1) fall into the category of feature engineering.",
"CroMatch (Gulic et al., 2016), AML (Cruz et al., 2009), XMap (Djeddi and Khadir, 2010) perform ontology matching based on heuristic methods that rely on aggregation functions.",
"LogMap and LogMapBio (Jimenez-Ruiz and Grau, 2011) use logic-based reasoning over the extracted features and cast the ontology matching to a satisfiability problem.",
"Table 1 shows the performance of our algorithm compared to the five top performing systems on the Conference 2016 benchmark, according to the results published in OAEI 3 .",
"DeepAligment achieves the highest microF 1 measure and the highest recall.",
"We were able to perform statistical significance test only for the two systems that were publicly available.",
"DeepAlignment is significantly better than both of them with a p value 0 .",
"05 .",
"In order to explore the performance effect of the many-to-many mappings that DeepAlignment produces we also did experiments where our extendMap algorithm was not used, thus generating only one-to-one alignments.",
"We give these results under the DeepAlignment listing.",
"It can be seen that DeepAlignment achieves the same level of recall as the state-of-the-art systems and this with no feature engineering.",
"When we compare the performance of DeepAlignment and DeepAlignment we see that the use of extendMap generates correct many-to-many alignments and thus it does not produce large numbers of false positives.",
"In any case, however, we retain a small precision which indicates a semantic similarity and conceptual association coalescence.",
"We perform additional experiments to investigate the importance of the counter-fitting step, which are summarized in Table",
"2. In all of these experiments, we have applied the extendMap algorithm.",
"The last row of Table 2, corresponds to the best result reported in Table",
"1. The first row 3 http://oaei.ontologymatching.org/ 2016/ 793 gives the results of executing the algorithm without the counter-fitting process, just by providing the Paragram-SL999 word vectors.",
"The results support the importance of the counter-fitting process, which succeeds in tailoring the word embeddings to the ontology matching task.",
"By injecting only antonymy information (second row), we observe an increase in precision, but a decrease in recall.",
"This behavior is due to the fact that the antonym repel factor imposes an orthogonality constraint to the word vectors, leading to higher values of the dis distance.",
"In absence of synonymy information, the majority of words tend to become antonymous.",
"The third row of Table 2 gives the performance when we also include synonyms extracted from PPDB but no antonymy information.",
"We can see that this leads to a large increase of all the recorded performance metrics.",
"Finally, we also include antonymy information only from the Cmt and the Conference ontologies found in the Conference dataset.",
"This has two effects: an increase in recall, but a decrease in precision.",
"This can be explained by the fact that even though all ontologies describe the same domain the description granularity provided by each of them is not capable of giving all the antonymy relations needed to provide more refined alignments.",
"Table 3 summarizes the obtained results from the matching of the Schema.org and DBpedia ontologies.",
"The fact that the alignment is incomplete restricts us on testing the performance only on the recall.",
"To make the comparison as fair as possible, we did not apply the extendMap algorithm.",
"We should highlight that we have applied the recounter-fitting process because the synonyms that we have extracted from the PPDB and WikiSynonyms were very few compared to the constructed antonyms.",
"The results of the LogMap system show a quite similar behavior with the experiments conducted in the conference dataset.",
"However the recall of AML is zero.",
"It System Recall DeepAlignment 0.82 LogMap 0.5 AML 0 Table 3: Results on aligning Schema.org and DBpedia ontologies.",
"discovers none of the available alignments even though it manages to recall other quite reasonable matchings, which, however, are not included in the ground truth.",
"According to our understanding, this might be an indication of the absence of domain transferability of the extracted features as well as of the implemented metrics.",
"We summarize in Ta-Parameters Recall Recounter-fitting Synonyms Antonyms No No No 0.71 No No Yes 0.76 No Yes No 0.84 No Yes Yes 0.76 Yes Yes Restricted 0.82 Table 4: Experiments on aligning Schema.org and DBpedia ontologies.",
"ble 4 the results of the experiments we did on the two domains to study the effect of counter-fitting and recounter-fitting.",
"As we can see, even without the counter-fitting, the semantic embeddings show quite good results.",
"This provides evidence on the importance of using representation learning techniques instead of the classical feature engineering choice.",
"By injecting only antonymy information (second row), we observe a different behavior in the recall metric compared to the one presented in Table",
"2. This can be explained by the fact that while the antonym repel factor imposes an orthogonality constraint, its effect is by no means universal to the whole word vector space.",
"Therefore, a misalignment can be pushed far away leaving the space open for a true alignment to be detected.",
"With the addition of the extracted synonyms, we observe an increase of 0 .",
"13 in the recall.",
"However, the insertion of the extracted antonyms leads to lower performance.",
"This shows practically the importance of applying the recounter-fitting process that allows both the synonym attract and the antonym repel factors to affect the word vectors.",
"DeepAlignment vs. initial word vectors.",
"To investigate the impact of the initial pre-trained word vectors on DeepAlignment's performance, we carried out two additional experiments, this time using a set of word2vec vectors (Mikolov et al., 2013b), trained on the Google news dataset 4 .",
"We report and compare the obtained results to the ones produced by the use of Paragram-SL999 vectors in Table 5.",
"In the absence of counter-fitting, Counterfitting WordVectors Conference Schema.orgDBpedia P R Micro-F1 R No word2vec 0.64 0.52 0.58 0.74 No Paragram 0.63 0.55 0.59 0.71 Yes word2vec 0.67 0.75 0.71 0.75 Yes Paragram 0.71 0.80 0.75 0.76 Table 5: Dependency of DeepAlignment's performance on the choice of the initial word vectors 6 .",
"the word2vec vectors achieve better results on the Schema.org DBpedia scenario, however, they exhibit lower performance on the conference dataset.",
"This observation is in accordance with recent studies (Hill et al., 2016a) which show that different word vectors optimization objectives yield representations tailored to different applications and domains.",
"After the application of the counter-fitting process, the use of Paragram-SL999 vectors leads to a better performance.",
"This fact provides additional evidence that word vectors which reflect semantic similarity are better candidates for being further tailored to the ontology matching task.",
"DeepAlignment vs. resources' coverage.",
"The choice and coverage of the different lexical resources may have a determining factor on the performance of DeepAlignment.",
"For that reason, we present in Table 6 a set of experiments where we exclude a part of the synonymy/antonymy relations from the various semantic lexicons.",
"For both the matching scenarios, we experimented with excluding all the antonyms from PPDB and WikiSynonyms.",
"For the conference dataset, we addi-tionaly experimented with including only a subset of PPDB synonyms (50% coverage).",
"Finally, we carried out one experiment where we excluded all the synonymy information extracted from WikiSynonyms for the Schema.org DBpedia scenario.",
"The resulted performance is presented in 4 https://code.google.com/p/word2vec 6 For the Schema.org DBpedia scenario's experiments, the recounter-fitting process has not been applied.",
"the rows 1, 4, 2, 5 of Table 6, respectively.",
"The reported results provide evidence that the greater the coverage of synonyms and antonyms, the greater the performance of DeepAlignment will be.",
"In this paper, we propose the refinement of pretrained word vectors with the purpose of deriving ontological entity descriptions which are tailored to the ontology matching task.",
"The refined word representations are learned so that they incorporate domain knowledge encoded in ontologies as well as knowledge extracted from semantic lexicons.",
"The refinement procedure does not use any explicit information relevant to the ontology matching task making the entity representation task completely unsupervised.",
"We perform ontology matching by applying the Stable Marriage algorithm over the entities' pairwise distances.",
"Our experimental results demonstrate significant performance gains over the state-of-the-art and show a novel way to study the problem of ontology matching under the setting of NLP.",
"We would like to thank the anonymous reviewers for their insightful comments on the paper.",
"This project was supported by the Swiss State Secretariat for Education, Research and Innovation SERI (SERI; contract number 15.0303) through the European Union's Horizon 2020 research and innovation programme (grant agreement No 688203; bIoTope).",
"This paper reflects the authors' view only, and the EU as well as the Swiss Government is not responsible for any use that may be made of the information it contains."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"other",
"other",
"other"
] |
[
"In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.",
"Researchers hope that models trained on these more challenging datasets will rely less on superficial patterns, and thus be less brittle.",
"However, despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models.",
"In this paper, we conduct a large-scale controlled study focused on question answering, assigning workers at random to compose questions either",
"(i) adversarially (with a model in the loop); or",
"(ii) in the standard fashion (without a model).",
"Across a variety of models and datasets, we find that models trained on adversarial data usually perform better on other adversarial datasets but worse on a diverse collection of out-of-domain evaluation sets.",
"Finally, we provide a qualitative analysis of adversarial (vs standard) data, identifying key differences and offering guidance for future research.",
"1 1 Introduction Across such diverse natural language processing (NLP) tasks as natural language inference (NLI; Poliak et al., 2018; Gururangan et al., 2018), question answering (QA; Kaushik and Lipton, 2018), and sentiment analysis (Kaushik et al., 2020), researchers have discovered that models can succeed on popular benchmarks by exploiting spurious associations that characterize a particular dataset but do not hold more widely.",
"Despite performing well on independent and identically distributed (i.i.d.) data, these models are liable under plausible domain shifts.",
"With the goal of providing more challenging benchmarks that require this stronger form of generalization, an emerging line of research has 1 Data collected during this study is publicly available at https://github.com/facebookresearch/aqa-study.",
"investigated adversarial data collection (ADC), a scheme in which a worker interacts with a model (in real time), attempting to produce examples that elicit incorrect predictions (e.g., Dua et al., 2019; Nie et al., 2020).",
"The hope is that by identifying parts of the input domain where the model fails one might make the model more robust.",
"Researchers have shown that models trained on ADC perform better on such adversarially collected data and that with successive rounds of ADC, crowdworkers are less able to fool the models (Dinan et al., 2019).",
"While adversarial data may indeed provide more challenging benchmarks, the process and its actual benefits vis-a-vis tasks of interest remain poorly understood, raising several key questions:",
"(i) do the resulting models typically generalize better out of distribution compared to standard data collection",
"(SDC)?;",
"(ii) how much can differences between ADC and SDC be attributed to the way workers behave when attempting to fool models, regardless of whether they are successful?",
"and",
"(iii) what is the impact of training models on adversarial data only, versus using it as a data augmentation strategy?",
"In this paper, we conduct a large-scale randomized controlled study to address these questions.",
"Focusing our study on span-based question answering and a variant of the Natural Questions dataset (NQ; Lee et al., 2019; Karpukhin et al., 2020), we work with two popular pretrained transformer architecturesBERT large (Devlin et al., 2019) and ELECTRA large (Clark et al., 2020) each fine-tuned on 23 .",
"1 k examples.",
"To eliminate confounding factors when assessing the impact of ADC, we randomly assign the crowdworkers tasked with generating questions to one of three groups:",
"(i) with an incentive to fool the BERT model;",
"(ii) with an incentive to fool the ELECTRA model; and",
"(iii) a standard, non-adversarial setting (no model in the loop).",
"The pool of contexts is the same for each group and each worker is asked to Figure 1: Platform shown to workers generating questions in the ADC setting.",
"generate five questions for each context that they see.",
"Workers are shown similar instructions (with minimal changes), and paid the same base amount.",
"We fine-tune three models (BERT, RoBERTa, and ELECTRA) on resulting datasets and evaluate them on held-out test sets, adversarial test sets from prior work (Bartolo et al., 2020), and 12 MRQA (Fisch et al., 2019) datasets.",
"For all models, we find that while fine-tuning on adversarial data usually leads to better performance on (previ-ously collected) adversarial data, it typically leads to worse performance on a large, diverse collection of out-of-domain datasets (compared to fine-tuning on standard data).",
"We observe a similar pattern when augmenting the existing dataset with the adversarial data.",
"Results on an extensive collection of out-of-domain evaluation sets suggest that ADC training data does not offer clear benefits vis-`a-vis robustness under distribution shift.",
"To study the differences between adversarial and standard data, we perform a qualitative analysis, categorizing questions based on a taxonomy (Hovy et al., 2000).",
"We notice that more questions in the ADC dataset require numerical reasoning compared to the SDC sample.",
"These qualitative insights may offer additional guidance to future researchers.",
"In an early example of model-in-the-loop data collection, Zweig and Burges (2012) use n -gram language",
"language models to suggest candidate incorrect answers for a fill-in-the-blank task.",
"Richardson et al. (2013) suggested ADC for QA as proposed future work, speculating that it might challenge state-of-the-art models.",
"In the Build It Break It, The Language Edition shared task (Ettinger et al., 2017), teams worked as builders (training models) and breakers (creating challenging examples for subsequent training) for sentiment analysis and QA-SRL.",
"Research on ADC has picked up recently, with Chen et al. (2019) tasking crowdworkers to construct multiple-choice questions to fool a BERT model and Wallace et al. (2019) employing Quizbowl community members to write Jeopardy-style questions to compete against QA models.",
"Zhang et al. (2018) automatically generated questions from news articles, keeping only those questions that were incorrectly answered by a QA model.",
"Dua et al. (2019) and Dasigi et al. (2019) required crowdworkers to submit only questions that QA models answered incorrectly.",
"To construct FEVER 2 .",
"0 (Thorne et al., 2019), crowdworkers were required to fool a fact-verification system trained on the FEVER (Thorne et al., 2018) dataset.",
"Some works explore ADC over multiple rounds, with adversarial data from one round used to train models in the subsequent round.",
"Yang et al. (2018b) ask workers to generate challenging datasets working first as adversaries and later as collaborators.",
"Dinan et al. (2019) build on their work, employing ADC to address offensive language identification.",
"They find that over successive rounds of training, models trained on ADC data are harder for humans to fool than those trained on standard data.",
"Nie et al. (2020) applied ADC for an NLI task over three rounds, finding that training for more rounds improves model performance on adversarial data, and observing improvements on the original evaluations set when training on a mixture of original and adversarial training data.",
"Williams et al. (2020) conducted an error analysis of model predictions on the datasets collected by Nie et al. (2020).",
"Bartolo et al. (2020) studied the empirical efficacy of ADC for SQuAD (Rajpurkar et al., 2016), observing improved performance on adversarial test sets but noting that trends vary depending on the models used to collect data and to train.",
"Previously, Lowell et al. (2019) observed similar issues in active learning, when the models used to acquire data and for subsequent training differ.",
"Yang et al. (2018a); Zellers et al. (2018, 2019) first collect datasets and then filter examples based on predictions from a model.",
"Paperno et al. (2016) apply a similar procedure to generate a language modeling dataset (LAMBADA).",
"Kaushik et al. (2020, 2021) collect counterfactually augmented data (CAD) by asking crowdworkers to edit existing documents to make counterfactual labels applicable, showing that models trained on CAD generalize better out-of-domain.",
"Absent further assumptions, learning classifiers robust to distribution shift is impossible (Ben-David et al., 2010).",
"While few NLP papers on the matter make their assumptions explicit, they typically proceed under the implicit assumptions that the labeling function is deterministic (there is one right answer), and that covariate shift (Shimodaira, 2000) applies (the labeling function p ( y | x ) is invariant across domains).",
"Note that neither condition is generally true of prediction problems.",
"For example, faced with label shift (Scholkopf et al., 2012; Lipton et al., 2018) p ( y | x ) can change across distributions, requiring one to adapt the predictor to each environment.",
"In our study of ADC for QA, each crowdworker is shown a short passage and asked to create 5 questions and highlight answers (spans in the passage, see Fig. 1).",
"We provide all workers with the same base pay and for those assigned to ADC, pay out an additional bonus for each question that fools the QA model.",
"Finally, we field a different set of workers to validate the generated examples.",
"Context passages For context passages, we use the first 100 words of Wikipedia articles.",
"Truncating the articles keeps the task of generating questions from growing unwieldy.",
"These segments typically contain an overview, providing ample material for factoid questions.",
"We restrict the pool of candidate contexts by leveraging a variant of the Natural Questions dataset (Kwiatkowski et al., 2019; Lee et al., 2019).",
"We first keep only a subset of 23 .",
"1 k question/answer pairs for which the context passages are the first 100 words of Wikipedia articles 2 .",
"From these passages, we sample 10 k at random for our study.",
"Models in the loop We use BERT large (Devlin et al., 2019) and ELECTRA large (Clark et al., 2020) models as our adversarial models in the loop, using the implementations provided by Wolf et al. (2020).",
"We fine-tune these models for span-based question-answering, using the 23 .",
"1 k training examples (subsampled previously) for 20 epochs, with early-stopping based on word-overlap F1 3 over the validation set.",
"Our BERT model achieves an EM score of 73 .",
"1 and an F1 score of 80 .",
"5 on an i.i.d. validation set.",
"The ELECTRA model performs slightly better, obtaining an 74 .",
"2 EM and 81 .",
"2 F1 on the same set.",
"Crowdsourcing protocol We build our crowdsourcing platform on the Dynabench interface (Kiela et al., 2021) and use Amazon's Mechanical Turk to recruit workers to write questions.",
"To ensure high quality, we restricted the pool to U.S. residents who had already completed at least 1000 HITs and had over 98% HIT approval rate.",
"For each task, we conducted several pilot studies to gather feedback from crowdworkers on the task and interface.",
"We identified median time taken by workers to complete the task in our pilot studies and used that to design the incentive structure for the main task.",
"We also conducted multiple studies with different variants of instructions to observe trends in the quality of questions and refined our instructions based on feedback from crowdworkers.",
"Feedback from the pilots also guided improvements to 2 We used the data prepared by Karpukhin et al. (2020), available at https://www.github.com/facebookresearch/DPR.",
"3 Word-overlap F1 and Exact Match (EM) metrics introduced in Rajpurkar et al. (2016) are commonly used to evaluate performance of passage-based QA systems, where the correct answer is a span in the given passage.",
"our crowdsourcing interface.",
"In total, 984 workers took part in the study, with 741 creating questions.",
"In our final study, we randomly assigned workers to generate questions in the following ways:",
"(i) to fool the BERT baseline;",
"(ii) to fool the ELECTRA baseline; or",
"(iii) without a model in the loop.",
"Before beginning the task, each worker completes an onboarding process to familiarize them with the platform.",
"We present the same set of passages to workers regardless of which group they are assigned to, tasking them with generating 5 questions for each passage.",
"Incentive structure During our pilot studies, we found that workers spend 2 3 minutes to generate 5 questions.",
"We provide workers with the same base pay $0 .",
"75 per HIT(to ensure compensation at a $15 /hour rate).",
"For tasks involving a model in the loop, we define a model prediction to be incorrect if its F1 score is less than 40% , following the threshold set by Bartolo et al. (2020).",
"Workers tasked with fooling the model receive bonus pay of $0 .",
"15 for every question that leads to an incorrect model prediction.",
"This way, a worker can double their pay if all 5 of their generated questions induce incorrect model predictions.",
"Quality control Upon completion of each batch of our data collection process, we presented 20% of the collected questions to a fourth group of crowdworkers who were tasked with validating whether the questions were answerable and the answers were correctly labeled.",
"In addition, we manually verified a small fraction of the collected question-answer pairs.",
"If validations of at least 20% of the examples generated by a particular worker were incorrect, their work was discarded in its entirety.",
"The entire process, including the pilot studies cost $50 k and spanned a period of seven months.",
"Through this process, we collected over 150 k question-answer pairs corresponding to the 10 k contexts ( 50 k from each group) but the final datasets are much smaller, as we explain below.",
"Our study allows us to answer three questions:",
"(i) how well do models fine-tuned on ADC data generalize to unseen distributions compared to fine-tuning on SDC?",
"(ii) Among the differences between ADC and SDC, how many are due to workers trying to fool the model regardless of whether they are successful?",
"and",
"(iii) what is the impact of training on adversarial data only versus using it as a data augmentation strategy?",
"Datasets For both BERT and ELECTRA, we first identify contexts for which at least one question elicited an incorrect model prediction.",
"Note that this set of contexts is different for BERT and ELECTRA.",
"For each such context c , we identify the number of questions k c (out of 5) that successfully fooled the model.",
"We then create 3 datasets per model by, for each context,",
"(i) choosing precisely those k c questions that fooled the model (BERT fooled and ELECTRA fooled );",
"(ii) randomly choosing k c questions (out of 5 ) from ADC data without replacement (BERT random and ELECTRA random )regardless of whether they fooled the model; and",
"(iii) randomly choosing k c questions (out of 5 ) from the SDC data without replacement.",
"Thus, we create 6 datasets, where all 3 BERT datasets have the same number of questions per context (and 11 . 3 k total training examples), while all 3 ELECTRA datasets likewise share the same number of questions per context (and 14 . 7 k total training examples).",
"See Table 1 for details on the number of passages and question-answer pairs used in the different splits.",
"Models For our empirical analysis, we fine-tune BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ELECTRA (Clark et al., 2020) models on all six datasets generated as part of our study (four datasets via ADC: BERT fooled , BERT random , ELECTRA fooled , ELECTRA random , and the two datasets via SDC).",
"We also fine-tune these models after augmenting the original data to collected datasets.",
"We report the means and standard deviations (in subscript) of EM and F1 scores following 10 runs of each experiment.",
"Models fine-tuned on all ADC datasets typically perform better on their held-out test sets than those trained on SDC data and vice-versa (Table 2 and Appendix Table 5).",
"RoBERTa fine-tuned on the BERT fooled training set obtains EM and F1 scores of 49 .",
"2 and 71 .",
"2 , respectively, on the BERT fooled test set, outperforming Evaluationset BERT fooled BERT random SDC OriginalDev.",
"RoBERTa models fine-tuned on BERT random (EM: 48 . 0 , F1: 69 . 8 ) and SDC (EM: 42 . 0 , F1: 65 . 3 ).",
"Performance on the original dev set (Karpukhin et al., 2020) is generally comparable across all models.",
"Out-of-domain generalization to adversarial data We evaluate these models on adversarial test sets constructed with BiDAF (D BiDAF ), BERT (DBERT ) and RoBERTa (D RoBERTa ) in the loop (Bar-tolo et al., 2020).",
"Prior work suggests that training on ADC data leads to models that perform better on similarly constructed adversarial evaluation sets.",
"Both BERT and RoBERTa models fine-tuned on adversarial data generally outperform models fine-tuned on SDC data (or when either datasets are augmented to the original data) on all three evaluation sets (Table 3 and Appendix Table 6).",
"A RoBERTa model fine-tuned on BERT fooled outperforms a RoBERTa model fine-tuned on SDC by 9 .",
"1 , 9 .",
"3 , and 6 .",
"2 EM points on D RoBERTa , DBERT , and D BiDAF , respectively.",
"We observe similar trends on ELECTRA models fine-tuned on ADC data versus SDC data, but these gains disappear when the same models are finetuned on augmented data.",
"For instance, while ELECTRA fine-tuned on BERT random obtains an EM score of 14 .",
"8 on D RoBERTa , outperforming an ELECTRA fine-tuned on SDC data by 3 pts, the difference is no longer significant when respective models are fine-tuned after original data is augmented to these datasets.",
"ELECTRA models fine-tuned on ADC data with ELECTRA in the loop perform no better than those trained on SDC.",
"Fine-tuning ELECTRA on SDC augmented to original data leads to an 1 pt improvement on both metrics compared to augmenting ADC.",
"Overall, we find that models fine-tuned on ADC data typically generalize better to out-of-domain adversarial test sets than models fine-tuned on SDC data, confirming the findings by Dinan et al. (2019).",
"Out-of-domain generalization to MRQA We further evaluate these models on 12 out-of-domain datasets used in the 2019 MRQA shared task 4 (Ta-ble 4 and Appendix Table 7).",
"5 Notably, for BERT, fine-tuning on SDC data leads to significantly better performance (as compared to fine-tuning on 4 The MRQA 2019 shared task includes HotpotQA (Yang et al., 2018a), Natural Questions (Kwiatkowski et al., 2019), SearchQA (Dunn et al., 2017), SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), BioASQ (Tsatsaronis et al., 2015), DROP (Dua et al., 2019), DuoRC (Saha et al., 2018), RelationExtraction (Levy et al., 2017), RACE (Lai et al., 2017), and TextbookQA (Kembhavi et al., 2017).",
"5 Interestingly, RoBERTa appears to perform better compared to BERT and ELECTRA.",
"Prior works have hypothesized that the bigger size and increased diversity of the pretraining corpus of RoBERTa (compared to those of BERT and ELECTRA) might somehow be responsible for RoBERTa's better out-of-domain generalization, (Baevski et al., 2019; Hendrycks et al., 2020; Tu et al., 2020).",
"ADC data collected with BERT) on 9 out of 12 MRQA datasets, with gains of more than 10 EM pts on 6 of them.",
"On BioASQ, BERT fine-tuned on BERT fooled obtains EM and F1 scores of 23 .",
"5 and 30 .",
"3 , respectively.",
"By comparison, fine-tuning on SDC data yields markedly higher EM and F1 scores of 35 .",
"1 and 55 .",
"7 , respectively.",
"Similar trends hold across models and datasets.",
"Interestingly, ADC fine-tuning often improves performance on DROP compared to SDC.",
"For instance, RoBERTa finetuned on ELECTRA random outperforms RoBERTa fine-tuned on SDC by 7 pts.",
"Note that DROP itself was adversarially constructed.",
"On Natural Questions, models fine-tuned on ADC data generally perform comparably to those fine-tuned on SDC data.",
"RoBERTa fine-tuned on BERT random obtains EM and F1 scores of 48 .",
"1 and 62 .",
"6 , respectively, whereas RoBERTa fine-tuned on SDC data obtains scores of 47 .",
"9 and 61 .",
"7 , respectively.",
"It is worth noting that passages sourced to construct both ADC and SDC datasets come from the Natural Questions dataset, which could be one reason why models fine-tuned on ADC datasets perform similar to those fine-tuned on SDC datasets when evaluated on Natural Questions.",
"BERT random and ELECTRA random typically outperform models fine-tuned on BERT fooled and ELECTRA fooled , respectively, on adversarial test data collected in prior work (Bartolo et al., 2020), as well as on MRQA.",
"Similar observation can be made when the ADC data is augmented with the original training data.",
"These trends suggest that the ADC process (regardless of the outcome) explains our results more than successfully fooling a model.",
"Furthermore, models fine-tuned only on SDC data tend to outperform ADC-only fine-tuned models; however, following augmentation, ADC fine-tuning achieves comparable performance on more datasets than before, showcasing generalization following augmentation.",
"Notice that augmenting ADC data to original data may not always help.",
"BERT fine-tuned on original 23 .",
"1 k examples achieves an EM 11 .",
"3 on SearchQA.",
"When fine-tuned on BERT fooled augmented to the original data, this drops to 8 .",
"7 , and when fine-tuned on BERT random augmented to the original data, it drops to 11 .",
"2 .",
"Fine-tuning on SDC augmented to the original data, however, results in EM of 13 .",
"6 .",
"Finally, we perform a qualitative analysis over the collected data, revealing profound differences with models in (versus out of) the loop.",
"Recall that be-Finetunedmodel: BERT large Evaluationset BioASQ DROP DuoRC RelationExtraction RACE TextbookQA Trainingset EM F1 EM F1 EM F1 EM F1 EM F1 EM F1 Original(23.1k) 19 .",
"To begin, we analyze 100 questions from each dataset and categorize them using the taxonomy introduced by Hovy et al. (2000).",
"6 We also look at 6 This taxonomy can be accessed at https://www.isi.edu/nat ural-language/projects/webclopedia/Taxonomy/taxonomy who what when where which how 050100150200250300350400450500550600",
"the first word of the wh -type questions in each dev set (Fig. 3) and observe key qualitative differences between data via ADC and SDC for both models.",
"In case of ADC with BERT (and associated SDC), while we observe that most questions in the dev sets start with what , ADC has a higher proportion compared to SDC ( 587 in BERT fooled and 492 in BERT random versus 416 in SDC).",
"Furthermore, we notice that compared to BERT fooled dev set, SDC has more when ( 148 ) and who -type ( 220 ) questions, the answers to which typically refer to dates, places and people (or organizations), respectively.",
"This is also reflected in the taxonomy categorization.",
"Interestingly, the BERT random dev set has more when and who -type questions than BERT fooled ( 103 and 182 versus 50 and 159 , respec-tively).",
"This indicates that the BERT model could have been better at answering questions related to dates and people (or organizations), which could have further incentivized workers not to generate toplevel.html such questions upon observing these patterns.",
"Similarly, in the 100 -question samples, we find that a larger proportion of questions in ADC are categorized as requiring numerical reasoning ( 11 and 18 in BERT fooled and BERT random , respectively) compared to SDC ( 7 ).",
"It is possible that the model's performance on numerical reasoning (as also demonstrated by its lower performance on DROP compared to fine-tuning on ADC or SDC) would have incentivized workers to generate more questions requiring numerical reasoning and as a result, skewed the distribution towards such questions.",
"Similarly, with ELECTRA, we observe that what -type questions constitute most of the questions in the development sets for both ADC and SDC, although data collected via ADC has a higher proportion of these ( 641 in ELECTRA fooled and 619 in ELECTRA random versus 542 in SDC).",
"We also notice more how -type questions in ADC ( 126 in ELECTRA random ) vs 101 in SDC, and that the SDC sample has more questions that relate to dates ( 223 ) but the number is lower in the ADC samples ( 157 and 86 in ELECTRA random and ELECTRA fooled , respectively).",
"As with BERT, the ELECTRA model was likely better at identifying answers about dates or years which could have further incentivized workers to generate less questions of such types.",
"However, unlike with BERT, we observe that the ELECTRA ADC and SDC 100 -question samples contain similar numbers of questions involving numerical answers ( 8 , 9 and 10 in ELECTRA fooled , ELECTRA random and SDC respectively).",
"Lastly, despite explicit instructions not to generate questions about passage structure (Fig. 1), a small number of workers nevertheless created such questions.",
"For instance, one worker wrote, What is the number in the passage that is one digit less than the largest number in the passage? While most such questions were discarded during validation, some of these are present in the final data.",
"Overall, we notice considerable differences between ADC and SDC data, particularly vis-a-vis what kind of questions workers generate.",
"Our qualitative analysis offers additional insights that suggest that ADC would skew the distribution of questions workers create, as the incentives align with quickly creating more questions that can fool the model.",
"This is reflected in all our ADC datasets.",
"One remedy could be to provide workers with initial questions, asking them to edit those questions to elicit incorrect model predictions.",
"Similar strategies were employed in (Ettinger et al., 2017), where breakers minimally edited original data to elicit incorrect predictions from the models built by builders , as well as in recently introduced adversarial benchmarks for sentiment analysis (Potts et al., 2020).",
"In this paper, we demonstrated that across a variety of models and datasets, training on adversarial data leads to better performance on evaluation sets created in a similar fashion, but tends to yield worse performance on out-of-domain evaluation sets not created adversarially.",
"Additionally, our results suggest that the ADC process (regardless of the outcome) might matter more than successfully fooling a model.",
"We also identify key qualitative differences between data generated via ADC and SDC, particularly the kinds of questions created.",
"controlled setting, offering insights that can guide future research in this direction.",
"These findings are particularly important given that ADC is more time-consuming and expensive than SDC, with workers requiring additional financial incentives.",
"We believe that a remedy to these issues could be to ask workers to edit questions rather than to generate them.",
"In the future, we would like to extend this study and investigate the efficacy of various constraints on question creation, and the role of other factors such as domain complexity, passage length, and incentive structure, among others.",
"The authors thank Max Bartolo, Robin Jia, Tanya Marwah, Sanket Vaibhav Mehta, Sina Fazelpour, Kundan Krishna, Shantanu Gupta, Simran Kaur, and Aishwarya Kamath for their valuable feedback on the crowdsourcing platform and the paper.",
"The passages in our datasets are sourced from the datasets released by Karpukhin et al. (2020) under a Creative Commons License.",
"As described in main text, we designed our incentive structure to ensure that crowdworkers were paid $15 /hour, which is twice the US federal minimum wage.",
"Our datasets focus on the English language, and are not collected for the purpose of designing NLP applications but to conduct a human study.",
"We share our dataset to allow the community to replicate our findings and do not foresee any risks associated with the use of this data."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"method",
"objective",
"other",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Recent work on the interpretability of deep neural language models has concluded that many properties of natural language syntax are encoded in their representational spaces.",
"However, such studies often suffer from limited scope by focusing on a single language and a single linguistic formalism.",
"In this study, we aim to investigate the extent to which the semblance of syntactic structure captured by language models adheres to a surface-syntactic or deep syntactic style of analysis, and whether the patterns are consistent across different languages.",
"We apply a probe for extracting directed dependency trees to BERT and ELMo models trained on 13 different languages, probing for two different syntactic annotation styles: Universal Dependencies (UD), prioritizing deep syntactic relations, and Surface-Syntactic Universal Dependencies (SUD), focusing on surface structure.",
"We find that both models exhibit a preference for UD over SUD with interesting variations across languages and layers and that the strength of this preference is correlated with differences in tree shape.",
"Recent work on interpretability in NLP has led to the consensus that deep neural language models trained on large, unannotated datasets manage to encode various aspects of syntax as a byproduct of the training objective.",
"Probing approaches applied to models like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) have demonstrated that one can decode various linguistic properties such as part-of-speech categories, dependency relations, and named-entity types directly from the internal hidden states of a pretrained model (Tenney et al., 2019b,b; Peters et al., 2018b).",
"Another line of work has tried to tie cognitive measurements or theories of human linguistic processing to the machinations of language models, often establishing strong parallels between the two (Prasad et al., 2019; Abnar et al., 2019; Gauthier and Levy, 2019).",
"As is the case for NLP in general, English has served as the de facto testing ground for much of this work, with other languages often appearing as an afterthought.",
"However, despite its ubiquity in the NLP literature, English is generally considered to be atypical across many typological dimensions.",
"Furthermore, the tendency of interpreting NLP models with respect to existing, canonical datasets often comes with the danger of conflat-ing the theory-driven annotation therein with sci-entific fact.",
"One can observe this to an extent with the Universal Dependencies (UD) project (Nivre et al., 2016), which aims to collect syntactic annotation for a large number of languages.",
"Many interpretability studies have taken UD as a basis for training and evaluating probes, but often fail to mention that UD, like all annotation schemes, is built upon specific theoretical assumptions, which may not be universally accepted.",
"Our research questions start from these concerns.",
"When probing language models for syntactic dependency structure, is UD with its emphasis on syntactic relations between content words really the best fit?",
"Or is the representational structure of such models better explained by a scheme that is more oriented towards surface structure, such as the recently proposed Surface-Syntactic Universal Dependencies (SUD) (Gerdes et al., 2018)?",
"And are these patterns consistent across typologically different languages?",
"To explore these questions, we fit the structural probe of Hewitt and Manning (2019) on pretrained BERT and ELMo representations, supervised by UD/SUD treebanks for 13 languages, and extract directed dependency trees.",
"We then conduct an extensive error analysis of the resulting probed parses, in an attempt to qualify our findings.",
"Our main contributions are the following:",
"1. A simple algorithm for deriving directed trees from the disjoint distance and depth probes introduced by Hewitt and Manning (2019).",
"2. A multilingual analysis of the probe's performance across 13 different treebanks.",
"3. An analysis showing that the syntactic information encoded by BERT and ELMo fit UD better than SUD for most languages.",
"There has been a considerable amount of recent work attempting to understand what aspects of natural language pre-trained encoders learn.",
"The classic formulation of these probing experiments is in the form of diagnostic classification (Ettinger et al., 2016; Belinkov et al., 2017; Hupkes et al., 2018; Conneau et al., 2018), which attempts to unearth underlying linguistic properties by fitting relatively underparameterised linear models over representations generated by an encoder.",
"These methods have also faced recent critique, for example, concerning the lack of transparency in the classifers' ability to extract meaningful information, as opposed to learning it.",
"Alternative paradigms for interpretability have therefore been proposed, such as correlation-based methods (Raghu et al., 2017; Saphra and Lopez, 2018; Kornblith et al., 2019; Chrupaa and Alishahi, 2019).",
"However, this critique does not invalidate diagnostic classification: indeed, more recent work (Hewitt and Liang, 2019) describes methods to show the empirical validity of certain probes, via control tasks.",
"Among probing studies specifically pertinent to our paper, Blevins et al. (2018) demonstrate that deep RNNs are capable of encoding syntax given a variety of pre-training tasks, including language modeling.",
"Peters et al. (2018b) demonstrate that, regardless of encoder (recurrent, convolutional, or self-attentive), biLM-based pre-training results in similar high-quality representations that implicitly encode a variety of linguistic phenomena, layer by layer.",
"Similarly, Tenney et al. (2019a) employ the edge probing' approach of Tenney et al. (2019b) to demonstrate that BERT implicitly learns the classi-cal NLP pipeline', with lower-level linguistic tasks encoded in lower layers and more complex phenomena in higher layers, and dependency syntax in layer 56.",
"Finally, Hewitt and Manning (2019) describe a syntactic probe for extracting aspects of dependency syntax from pre-trained representations, which we describe in Section",
"4. 3 Aspects of Syntax Syntax studies how natural language encodes meaning using expressive devices such as word order, case marking and agreement.",
"Some approaches emphasize the formal side and primarily try to account for the distribution of linguistic forms.",
"Other frameworks focus on the functional side to capture the interface to semantics.",
"And some theories use multiple representations to account for both perspectives, such as c-structure and f-structure in LFG (Kaplan and Bresnan, 1982; Bresnan, 2000) or surface-syntactic and deep syntactic representations in Meaning-Text Theory (Mel'cuk, 1988).",
"When asking whether neural language models learn syntax, it is therefore relevant to ask which aspects of syntax we are concerned with.",
"This is especially important if we probe the models by trying to extract syntactic representations, since these representations may be based on different theoretical perspectives.",
"As a first step in this direction, we explore two different dependency-based syntactic representations, for which annotations are available in multiple languages.",
"The first is Universal Dependencies (UD) (Nivre et al., 2016), a framework for cross-linguistically consistent morpho-syntactic annotation, which prioritizes direct grammatical relations between content words.",
"These relations tend to be more parallel across languages that use different surface features to encode the relations.",
"The second is Surface-Syntactic Universal Dependencies (SUD) (Gerdes et al., 2018), a recently proposed alternative to UD, which gives more prominence to function words in order to capture variations in surface structure across languages.",
"Figure 2 contrasts the two frameworks by showing how they annotate an English sentence.",
"While the two annotations agree on most syntactic relations (in black), including the analysis of core grammatical relations like subject (nsubj 1 ) and object (obj), they differ in the analysis of auxiliaries and prepositional phrases.",
"The UD annotation (in blue) treats the main verb chased as the root of the clause, while the SUD annotation (in red) assigns this role to the auxiliary has .",
"The UD annotation has a direct oblique relation between chased and room , treating the preposition from as a case marker, while the SUD annotation has an oblique relation between chased and from , analyzing room as the object of from .",
"The purpose of the UD style of 1 UD uses the nsubj relation, for nominal subject, while SUD uses a more general subj relation.",
"annotation is to increase the probability of the root and oblique relations being parallel in other languages that use morphology (or nothing at all) to encode the information expressed by auxiliaries and adpositions.",
"SUD is instead designed to bring out differences in surface structure in such cases.",
"The different treatment of function words affects not only adpositions (prepositions and postposi-tions) and auxiliaries (including copulas), but also subordinating conjunctions and infinitive markers.",
"Because of these systematic differences, dependency trees in UD tend to have longer average dependency length and smaller height 2 than in SUD.",
"To conduct our experiments, we make use of the structural probe proposed by Hewitt and Manning (2019), which is made up of two complementary components distance and depth.",
"The former is an intuitive proxy for the notion of two words being connected by a dependency: any two words w i , w j in a tree T are neighbors if their respective distance in the tree amounts to d T p w i , w j q 1 .",
"This metric can theoretically be applied to the vector space of any pretrained neural language model sentence encoding, which ouputs a set of vectors S h 1 , ..., h n for a sentence.",
"In practice, however, the distance between any two vectors t h i , h j u P S will not be directly comparable to their distance 2 The height of a tree is the length of the longest path from the root to a leaf (sometimes referred to as depth ).",
"in a corresponding syntactic tree T , because the model does not encode syntax in isolation.",
"To resolve this, Hewitt and Manning (2019) propose to learn a linear transformation matrix B , such that d B p h i , h j q extracts the distance between any two words w i , w j in a parse tree.",
"For an annotated corpus of L sentences, the distance probe can be learned via gradient descent as follows: min B L l 1 1 | n l | 2 i,j | d T l p w li , w lj q d B p h li , h lj q 2 | where | n l | is the length of sentence l , normalized by the number | n l | 2 of word pairs, and d T l p w li , w lj q is the distance of words w li and w lj in the gold tree.",
"While the distance probe can predict which words enter into dependencies with one another, it is insufficient for predicting which word is the head.",
"To resolve this, Hewitt and Manning (2019) employ a separate probe for tree depth, 3 where they make a similar assumption as they do for distance: a given (square) vector L2 norm || h 2 i || is analogous to w i 's depth in a tree T .",
"A linear transformation matrix B can therefore be learned in a similar way: min B L l 1 1 n l n i p|| w li || || B h li || 2 q where || w li || is the depth of a w li in the gold tree.",
"To be able to score probed trees (against UD and SUD gold trees) using the standard metric of unlabeled attachment score (UAS), we need to derive a rooted directed dependency tree from the information provided by the distance and depth probes.",
"Algorithm 1 outlines a simple method to retrieve a well-formed tree with the help of the Chu-Liu-Edmonds maximum spanning tree algorithm (Chu and Liu, 1965; McDonald et al., 2005).",
"Essentially, in a sentence S w 1 . . . w n , for every pair of nodes p w i , w j q with an estimated distance of d between them, if w i has smaller depth than w j , we set the weight of the arc p w i , w j q to d ; otherwise, we set the weight to 8 .",
"This is effectively a mapping from distances to scores, with larger distances resulting in lower arc scores from the parent to the child, and infinitely low scores from the child to the parent.",
"We also add a pseudo-root w 0 (essen-tial for decoding), which has a single arc pointing to the shallowest node (weighted 0).",
"We use the AllenNLP (Gardner et al., 2018) implementation of the Chu-Liu/Edmonds' algorithm.",
"In order to evaluate the extent to which a given model's representational space fits either annotation framework, we fit the structural probe on the model, layer by layer, using UD and SUD treebanks for supervision, and compute UAS over each treebank's test set as a proxy for a given layer's goodness-of-fit.",
"Language and Treebank Selection We reuse the sample of Kulmizev et al. (2019), which comprises 13 languages from different language families, with different morphological complexity, and with different scripts.",
"We use treebanks from UD v2.4 (Nivre et al., 2019) and their conversions into SUD.",
"4 Table 1 shows background statistics for the treebanks, including the percentage of adpositions (ADP) and auxiliaries (AUX), two important function word categories that are treated differently by UD and SUD.",
"A direct comparison of the UD and SUD representations shows that, as expected, UD has a higher percentage of relations directly connecting nouns and verbs (ContRel), higher average dependency length (DepLen) and lower average tree height (Height).",
"However, the magnitude of the difference varies greatly across languages.",
"5 Models We evaluate two pretrained language models: BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018a).",
"For BERT, we use the pretrained multilingual-bert-cased model provided by Google.",
"6 The model is trained on the concatenation of WikiDumps for the top 104 languages with the largest Wikipedias and features a 12-layer Transformer with 768 hidden units and 12 self-attention heads.",
"For ELMo, we make use of the pretrained monolingual models made available by Che et al. (2018).",
"These models are trained on 20 million words randomly sampled from the concatenation of WikiDump and CommonCrawl datasets for 44 different languages, including our 13 languages.",
"Each model features a character-based word embedding layer, as well as 2 bi-LSTM layers, each of which is 1024-dimensions wide.",
"Though we fit the probe on all layers of each model separately, we also learn a weighted average over each full model: model i L j 0 s j h i,j where s j is a learned parameter, h i,j is the encoding of word i at layer j , and L is the number of layers.",
"We surmise that, in addition to visualizing the probes' fit across layers, this approach will give us a more general notion of how well either model aligns with the respective frameworks.",
"We refer to this representation as the 13th BERT layer and the 3rd ELMo layer.",
"When determining the dimensionality of the transformation matrix (i.e. probe rank), we defer to each respective encoder's hidden layer sizes.",
"However, preliminary experiments indicated that probing accuracy was stable across ranks of decreasing sizes.",
"4 https://surfacesyntacticud.github.io/data/ 5 For Chinese, UD actually has slightly lower average dependency length than SUD.",
"6 https://github.com/google-research/bert Language Code Treebank # Sents %ADP %AUX %ContRel Dep Len Height UD SUD UD SUD UD SUD Arabic arb PADT 6075 15 1 37 24 4.17 3.92 7.20 9.82 Chinese cmn GSD 3997 5 3 37 30 3.72 3.74 4.30 6.56 English eng EWT 12543 8 6 20 12 3.13 2.94 3.48 5.11 Basque eus BDT 5396 2 13 34 25 2.99 2.90 3.49 4.18 Finnish fin TDT 12217 2 7 35 30 2.98 2.91 3.42 4.22 Hebrew heb HTB 5241 14 2 28 14 3.76 3.53 5.07 7.30 Hindi hin HDTB 13304 22 9 26 10 3.44 3.05 4.25 7.41 Italian ita ISDT 13121 14 5 21 8 3.30 3.12 4.21 6.28 Japanese jap GSD 7125 25 14 31 10 2.49 2.08 4.40 8.18 Korean kor GSD 4400 2 0 58 57 2.20 2.17 3.86 4.07 Russian rus SynTagRus 48814 10 1 31 22 3.28 3.13 4.21 5.24 Swedish swe Talbanken 4303 12 5 29 17 3.14 2.98 3.50 5.02 Turkish tur IMST 3664 3 2 33 30 2.21 2.12 3.01 3.37 Average -10784.62 12 5 32 22 3.14 3.00 4.20 5.91 Table 1: Treebank statistics: number of sentences (# Sents) and percentage of adpositions (ADP) and auxiliaries (AUX).",
"It is important to note that by probe we henceforth refer to the algorithm that combines both distance and depth probes to return a valid tree.",
"One could argue that, per recent insights in the interpretability literature (e.g. (Hewitt and Liang, 2019)), this model is too expressive in that it combines supervision from two different sources.",
"We do not consider this a problem, as the two probes are trained separately and offer views into two different abstract properties of the dependency tree.",
"As such, we do not optimize for UAS directly.",
"Figure 3 displays the UAS after fitting the structural probes on BERT and ELMo, per language and layer.",
"What is perhaps most noticeable is that, while BERT can achieve accuracies upwards of 79 UAS on some languages, ELMo fares consistently worse, maxing out at 65 for Hindi at layer",
"2. The most likely explanation for this is that the ELMo models are smaller than the multilingual BERT's 12-layer Transformer-based architecture, which was trained on orders of magnitude more data (albeit multilingually).",
"In general, we find that the probing performance is stable across languages, where layers 78 fare the best for BERT and layer 2 for ELMo.",
"7 This contrasts with prior observations (Tenney et al., 2019a), as the syntactic center of gravity' is placed higher in each model's hierarchy.",
"However, computing 7 It is important to note that layer 0 for ELMo is the nonrecurrent embedding layer which contains no contextual information.",
"a weighted average over layers tends to produce the best overall performance for each model, indicating that the probe can benefit from information encoded across various layers.",
"Once we compare the averaged results across syntactic representations, a preference for UD emerges, starting in layer 3 in BERT and layer 2 in ELMo.",
"We observe the max difference in favor of UD in layer 7 for BERT, where the probe performs 3 UAS points better than SUD, and in the weighted average (layer 13), with 4 UAS points.",
"The difference for the 13th BERT and 3rd ELMo layers is statistically significant at p 0 .",
"05 (Wilcoxon signed ranks test).",
"A further look at differences across languages reveals that, while most languages tend to overwhelmingly prefer UD, there are some that do not: Basque, Turkish, and, to a lesser extent, Finnish.",
"Furthermore, the preference towards SUD in these languages tends to be most pronounced in the first four and last two layers of BERT.",
"However, in the layers where we tend to observe the higher UAS overall (78), this is minimized for Basque/Turkish and almost eliminated for Finnish.",
"Indeed, we see the strongest preferences for UD in these layers overall, where Italian and Japanese are overwhelmingly pro-UD, to the order of 10+ UAS points.",
"Overall, we note that some languages consistently achieve higher accuracy, like Russian with 71/69 UAS for UD/SUD for BERT, while others fare poorly, like Turkish (52/43) and Chinese (51/46).",
"In the case of these languages, one can observe an obvious relation to the size of our reference treebanks, where Russian is by far the largest and Turkish and Chinese are the smallest.",
"To test the extent to which training set size affects probing accuracy, we trained our probe on the same treebanks, truncated to the size of the smallest one Turkish, with 3664 sentences.",
"Though we did observe a decline in accuracy in the largest treebanks (e.g. Russian, Finnish, and English) for some layers, the difference in aggregate was minimal.",
"Furthermore, the magnitude of the difference in UD and SUD probing accuracy was almost identical to that of the probes trained on full treebanks, speaking to the validity of our findings.",
"We refer the reader to Appendix A for these results.",
"Given that our findings seem to generally favor UD, another question we might ask is: are SUD treebanks simply harder to parse?",
"This may seem like a straight-forward hypothesis, given SUD's tendency to produce higher trees in aggregate, which may affect parsing accuracy even in the fully supervised case.",
"To test this, we trained UD and SUD parsers using the UDify model (Kondratyuk and Straka, 2019), which employs a biaffine attention decoder (Dozat and Manning, 2016) after fine-tuning BERT representations (similar to our 13th layer).",
"The results showed a slightly higher average UAS for UD (89.9 vs. 89.6) and a slightly higher LAS for SUD (86.8 vs. 86.5).",
"Neither difference is statistically significant (Wilcoxon signed ranks test), which seems to rule out an alternative explanation in terms of learnability.",
"We include the full range of results in Appendix B. In addition to this, we tested how well each framework's probing accuracy related to supervised UAS across languages.",
"We computed this measure by taking the Pearson correlation of each BERT probe's layer accuracy (per-language) with its respective framework accuracy.",
"All correlations proved to be significant at p 0 .",
"05 , with the exception of UD and SUD at layer",
"1. Figure 4 displays these results.",
"Here, we observe that probing accuracies correlate more strongly with supervised UAS for UD than for SUD.",
"We can interpret this to mean that the rate at which trees are decoded by the UD probe is more indicative of how well they can be parsed given a full view of their structure, rather than vice-versa.",
"Although correlation is an indirect measure here, we can still accept it to be in support of our general findings.",
"In order to gain a better understanding of these probing patterns, we move on to an error analysis over the dev sets of each treebank, as fit by the averaged models.",
"Figure 5 shows probe accuracy for different models (BERT/ELMo) and syntactic representations (UD/SUD) when attaching words of specific part-of-speech categories to their heads.",
"The general pattern is that we observe higher accuracy for UD for both models on all categories, the only exceptions being a slightly higher accuracy for both models on PRON and for ELMo on VERB and X. 8 However, the differences are generally 8 The X category is unspecified and extremely rare.",
"greater for function words, in particular ADP, AUX, SCONJ, PART and DET. In some respects, this is completely expected given the different treatment of these words in UD and SUD, and we can use the case of adpositions (ADP) to illustrate this.",
"In UD, the preposition from in a phrase like from the room is simply attached to the noun room , which is in general a short relation that is easy to identify.",
"In SUD, the relation between the preposition and the noun is reversed, and the preposition now has to be attached to whatever the entire phrase modifies, which often means that difficult attachment ambiguities need to be resolved.",
"However, exactly the same ambiguities need to be resolved for nominal words (NOUN, PRON, PROPN) in the UD representation, but there is no corresponding drop in accuracy for these classes in UD (except very marginally for PRON).",
"Similar remarks can be made for other function word categories, in particular AUX, SCONJ and PART.",
"It thus seems that the UD strategy of always connecting content words directly to other content words, instead of sometimes having these relations mediated by function words, results in higher accuracy overall when applying the probe to the representations learned by BERT and ELMo.",
"The behavior of different part-of-speech classes can also explain some of the differences observed across languages.",
"In particular, as can be seen in Table 1, most of the languages that show a clear preference for UD Chinese, Hebrew, Hindi, Italian and Japanese are all characterized by a high proportion of adpositions.",
"Conversely, the three languages that exhibit the opposite trend Basque, Finnish and Turkish have a very low proportion of adpositions.",
"The only language that does not fit this pattern is Chinese, which has a low percentage of adpositions but nevertheless shows a clear preference for UD.",
"Finally, it is worth noting that Korean shows no clear preference for either representation despite having a very low proportion of adpositions (as well as other function words), but this is due to the more coarse-grained word segmentation of the Korean treebank, which partly incorporates function words into content word chunks.",
"9 6.4 Sentence and Tree Properties Figure 6 depicts probing accuracy across different sentence lengths, dependency lengths, and dis-9 This is reflected also in the exceptionally high proportion of direct content word relations; cf.",
"tances to root.",
"It is apparent that, despite the abso-lute differences between models, the relative differences between representations are strikingly consistent in favor of UD.",
"For example, while the probe shows identical accuracy for the two representations for sentences of length 110, SUD decays more rapidly with increasing sentence length.",
"Furthermore, while the SUD probe is slightly more accurate at detecting sentence roots and their immediate dependencies, we observe a consistent advantage for dependencies of length 2+, until dropping off for the longest length bin of 10+.",
"Though Table 1 indicates that UD dependencies are slightly longer than those of SUD, this factor does not appear to influence the probe, as there are no significant correlations between differences in average dependency length and differences in UAS.",
"We observe a similar curve for varying distances to root, where the SUD probe performs slightly better than UD at the shortest distance, but decays faster for nodes higher in the tree.",
"In general, UD trees have lower height than SUD (see Table 1), which implies that tree height could be a major fac-Figure 6: UAS across sentence length bins (top); F1 across varying dependency lengths (middle); F1 across varying distances to root (bottom) tor at play here.",
"To verify this, we conducted a Pearson correlation test between the average increase in height from UD to SUD and the difference of the UD/SUD probe UAS per language.",
"This test returned 0 .",
"82 , p 0 .",
"001 , indicating that height is indeed crucial in accurately decoding trees across the two formalisms.",
"In an attempt to visualize how this may play out across languages, we plotted the per-sentence difference in probing accuracy between UD/SUD as a function of the difference in height of the respective gold UD/SUD trees.",
"Figure 7 depicts these results for BERT, where the x-axis indicates how many nodes higher a SUD tree is with respect to its reference UD tree.",
"It is apparent from Figure 7 that the preference Figure 7: Differences in the BERT probe's UAS (UD ` , SUD ) as a function of tree height per number of nodes (higher SUD tree ` , higher UD tree ), with smoothed means and 95% confidence ellipses as implemented in ggplot2 ) for UD can be largely explained via its lower tree height.",
"If we first examine Korean, the segmentation of which results in the smallest difference in height overall, we observe a distribution that is roughly centered around zero on both axes.",
"If we instead refer to the UD-preferring languages (Chinese, Hebrew, Hindi, Italian, and Japanese), we notice a strong skew of distributions towards the top right of the plot.",
"This indicates",
"(i) that the trees in these samples are higher for SUD and",
"(ii) that the corresponding sentences are easier to decode in UD.",
"By contrast, for the SUD-preferring languages (Basque, Finnish, and Turkish), we observe narrow distributions centered around 0 (sim-ilar to that of Korean), indicating minimal variation in tree height between UD and SUD.",
"What these language have in common is an agglutinative morphology, which means that they rely more on morphological inflection to indicate relationships between content words, rather than separate function words.",
"Sentences in these languages are therefore less susceptible to variations in tree height, by mere virtue of being shorter and possessing fewer relations that are likely be a better fit for UD, like those concerning adpositions.",
"We speculate that it is this inherent property that explains the layerwise preference for SUD (though a general indifference in aggregate), allowing for some language-specific properties, like the crucial role of auxiliaries in Basque, to be easier to probe for in SUD.",
"Conversely, with this in mind, it becomes easy to motivate the high preference for UD across some languages, given that they are not agglutinating and make heavy use of function words.",
"If we take the probe to be a proper decoding of a model's representational space, the encoding of syntactic structure according to an SUD-style analysis then becomes inherently more difficult, as the model is required to attend to hierarchy between words higher in the tree.",
"Interestingly, however, this does not seem to correspond to an increased difficulty in the case of supervised parsing, as observed earlier.",
"We have investigated the extent to which the syntactic structure captured by neural language models aligns with different styles of analysis, using UD treebanks and their SUD conversions as proxies.",
"We have extended the structural probe of Hewitt and Manning (2019) to extract directed, rooted trees and fit it on pretrained BERT and ELMo representations for 13 languages.",
"Ultimately, we observed a better overall fit for the UD-style formalism across models, layers, and languages, with some notable exceptions.",
"For example, while the Chinese, Hebrew, Hindi, Italian, and Japanese models proved to be overwhelmingly better-fit for UD, Basque aligned more with SUD, and Finnish, Korean and Turkish did not exhibit a clear preference.",
"Furthermore, an error analysis revealed that, when attaching words of various part-of-speech tags to their heads, UD fared better across the vast majority of categories, most notably adpositions and determiners.",
"Related to this, we found a strong correlation between differences in average tree height and the tendency to prefer one framework over the other.",
"This suggested a tradeoff between morphological complexity where differences in tree height between UD and SUD are minimal and probing accuracy similar and a high proportion of function words where SUD trees are signifi-cantly higher and probing accuracy favors UD.",
"For future work, besides seeking a deeper understanding of the interplay of linguistic factors and tree shape, we want to explore probes that combine the distance and depth assumptions into a single transformation, rather than learning separate probes and combining them post-hoc, as well as methods for alleviating treebank supervision altogether.",
"Lastly, given recent criticisms of probing approaches in NLP, it will be vital to revisit the insights produced here within a non-probing framework, for example, using Representational Similarity Analysis (RSA) (Chrupaa and Alishahi, 2019) over symbolic representations from treebanks and their encoded representations.",
"We want to thank Miryam De Lhoneux, Paola Merlo, Sara Stymne, and Dan Zeman and the ACL reviewers and area chairs for valuable feedback on preliminary versions of this paper.",
"We acknowledge the computational resources provided by CSC in Helsinki and Sigma2 in Oslo through NeIC-NLPL ( www.nlpl.eu ).",
"Joakim Nivre's contributions to this work were supported by grant 2016-01817 of the Swedish Research Council."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"This paper presents a novel, data-driven language model that produces entire lyrics for a given input melody.",
"Previously proposed models for lyrics generation suffer from the inability of capturing the relationship between lyrics and melody partly due to the unavailability of lyrics-melody aligned data.",
"In this study, we first propose a new practical method for creating a large collection of lyrics-melody aligned data and then create a collection of 1,000 lyrics-melody pairs augmented with precise syllable-note alignments and word/sentence/paragraph boundaries.",
"We then provide a quantitative analysis of the correlation between word/sentence/paragraph boundaries in lyrics and melodies.",
"We then propose an RNN-based lyrics language model conditioned on a featurized melody.",
"Experimental results show that the proposed model generates fluent lyrics while maintaining the compatibility between boundaries of lyrics and melody structures.",
"Writing lyrics for a given melody is a challenging task.",
"Unlike prose text, writing lyrics requires both knowledge and consideration of music-specific properties such as the structure of melody, rhythms, etc. (Austin et al., 2010; Ueda, 2010).",
"A simple example is the correlation between word boundaries in lyrics and the rests in a melody.",
"As shown in Figure 1, a single word spanning beyond a long melody rest can sound unnatural.",
"When writing lyrics, a lyricist must consider such constraints in content and lexical selection, which can impose extra cognitive loads.",
"This consideration when writing lyrics has motivated a wide-range of studies for the task of computer-assisted lyrics writing (Barbieri et al., 2012; Abe and Ito, 2012; Potash et al., 2015; Watanabe et al., 2017).",
"Such studies aim to model the @@ Flute 1 , 2)) ) ) ) ) ) ) ) ) )!",
"Fl 17 , 7) ) ) ) ) 7)!",
".",
") #) ) ) ) 7) ) ) ) ) ( Fl 21 , ) ) ) ) ) 7) )!",
"#) ) ) ) 7) )!",
") ) ) ' Fl 25 )! ) ) ) ) #) ) )! ) ) ) )! -)! ) ) ) 7)! . ) ) Fl 29 )! ) ) #) ) ) )! ) ) ) ) #) ) )! ) ) ) ) 7) )!#) ) ) ) 7) Fl 33 )! ) ) ) )! -)! 7) ( ) , )! ) ) ) )! 7) ) ) ) Fl 37 ( , ) ) (! 7) )! #) ) ) ) ) ) ' Fl 41 Fl 45 Fl 49 @@ Flute 1 , 2)) ) ) ) ) ) ) ) ) )!",
"FUNC indicates a function word.",
"The song is from the RWC Music Database (RWC-MDB-P-2001 No.20) (Goto et al., 2002).",
"language in lyrics and to design a computer system for assisting lyricists in writing.",
"They propose to constrain their models to generate only lyrics that satisfy given conditions on syllable counts, rhyme positions, etc.",
"However, such constraints are assumed to be manually provided by a human user, which requires the user to interpret a source melody and transform their interpretation to a set of constraints.",
"To assist users with transforming a melody to constraints, a language model that automatically captures the relationship between lyrics and melody is required.",
"Some studies (Oliveira et al., 2007; Oliveira, 2015; Nichols et al., 2009) have quantitatively analyzed the correlations between melody and phonological aspects of lyrics (e.g., the relationship between a beat and a syllable stress).",
"However, these studies do not address the relationship between melody and the discourse structure of lyrics.",
"Lyrics are not just a sequence of syllables but a meaningful sequence of words.",
"Therefore, it is desirable that the sentence/paragraph boundaries are determined based on both melody rests and context words.",
"Considering such line/paragraph structure of lyrics, we present a novel language model that gen-163 erates lyrics whose word, sentence, and paragraph boundaries are appropriate for a given melody, without manually transforming the melody to syllable constraints.",
"This direction of research has received less attention because it requires a large dataset consisting of aligned pairs of melody and segment boundaries of lyrics which has yet to exist.",
"To address this issue, we leverage a publicly-available collection of digital music scores and create a dataset of digital music scores each of which specifics a melody score augmented with syllable information for each melody note.",
"We collected 1,000 Japanese songs from an online forum where many amateur music composers upload their music scores.",
"We then automatically aligned each music score with the raw text data of the corresponding lyrics in order to augment it with the word, sentence, and paragraph boundaries.",
"The availability of such aligned, parallel data opens a new area of research where one can conduct a broad range of data-oriented research for investigating and modeling correlations between melodies and discourse structure of lyrics.",
"In this paper, with our melody-lyrics aligned songs, we investigate the phenomena that",
"(i) words, sentences, and paragraphs rarely span beyond a long melody rest and",
"(ii) the boundaries of larger components (i.e., paragraphs) tend to coincide more with longer rests.",
"To the best of our knowledge, there is no previous work that provides any quantitative analysis of this phenomenon with this size of data (see Section 7).",
"Following this analysis, we build a novel, data-driven language model that generates fluent lyrics whose sentence and paragraph boundaries fit an input melody.",
"We extend a Recurrent Neural Network Language Model (RNNLM) (Mikolov et al., 2010) so that its output can be conditioned on a featurized melody.",
"Both our quantitative and qualitative evaluations show that our model captures the consistency between melody and boundaries of lyrics while maintaining word fluency.",
"Our goal is to create a melody-conditioned language model that captures the correlations between melody patterns and discourse segments of lyrics.",
"The data we need for this purpose is a collection of melody-lyrics pairs where the melody and lyrics are aligned at the level of not only note-syllable alignment but also discourse components @@ 1 , 2)) ) ) ) ) ) ) ) ) )!",
")! ) ) ) ) #) ) )! ) ) ) )! -)! ) 29 )! ) ) #) ) ) )! ) ) ) ) #) ) )! ) ) ) ) 7) )!#) 33 )! ) ) ) )! )! 7) ( ) , )! ) ) ) )! 37 ( , ) ) (! 7 ) ) !#) ) ) ) ) ) '",
"(i.e., word/sentence/paragraph boundaries) of a lyric, as illustrated in the bottom of Figure",
"2. We create such a dataset by automatically combining two types of data available from online forum sites: digital music score data (the top of Figure 2) and raw lyrics data (the middle).",
"41 45 49",
"A digital music score specifies a melody score augmented with syllable information for each melody note (see the top of Figure 2).",
"Score data augmented in this way is sufficient for analyzing the relationship between the phonological aspects of lyrics and melody, but it is insufficient for our goal since the structural information of the lyrics is not included.",
"We thus augment score data further with boundaries of sentences, and paragraphs, where we assume that sentences and paragraphs of lyrics are approximately captured by lines and blocks , 1 respectively, of the lyrics in the raw text.",
"The integration of music scores and raw lyrics is achieved by (1) applying a morphological analyzer 2 to raw lyrics for word segmentation and Chinese character pronunciation prediction and (2) aligning music score with raw lyrics at the syllable level as illustrated in Figure",
"2. For this alignment, we employ the Needleman-Wunsch algorithm (Needleman and Wunsch, 1970).",
"This alignment process is reasonably accurate because it fails in principle only when the morphological analysis fails in Chinese character pronunciation prediction, which occurs for only less than 1% of the words in the data set.",
"With this procedure, we obtained 54,181 Japanese raw lyrics and 1,000 digital musical 1 Blocks are assumed to be segmented by empty lines.",
") ) 7) ) )",
"3 Correlations between melody and lyric In this section, we examine two phenomena related to boundaries of lyrics: (1) the positions of lyrics segment boundaries are biased to melody rest positions, and (2) the probability of boundary occurrence depends on the duration of a rest, i.e., a shorter rest tends to be a word boundary and a longer rest tends to be a block boundary, as shown in Figure",
"3. All analyses were performed on the training split of the melody-lyrics alignment data, which is described in Section",
")! ) ) ) )! -)! 7) ( ) , )! ) ) ) )! 7) Fl 37 ( , ) ) (! 7) )!#) ) ) ) ) ) '",
"BOB indicates a block boundary.",
"scores from online forum sites 3 ; we thus created 1,000 melody-lyrics pairs.",
"We refer to these 1,000 melody-lyrics pairs as a melody-lyrics alignment data 4 and refer to the remaining 53,181 lyrics without melody as a raw lyrics data.",
"We randomly split the 1,000 melody-lyrics alignments into two sets: 90% for analyzing/training and the remaining 10% for testing.",
"From those, we use 20,000 of the most frequent words whose syllable counts are equal to or less than 10, and converted others to a special symbol h unknown i .",
"All of the digital music score data we collected were distributed in the UST format, a common file format designed specifically for recently emerging computer vocal synthesizers.",
"While we focus on Japanese music in this study, our method for data creation is general enough to be applied to other language formats such as MusicXML and ABC, because transferring such data formats to UST is straightforward.",
"2. For the first phenomenon, we first calculated the distribution of boundary appearances at the positions of melody notes and rests.",
"Here, by the boundary of a line (or block), we refer to the position of the beginning of the line (or block).",
"5 In Figure 3, we say, for example, that the boundary of the first block beginning te-ra-shi te coincides with Rest#1.",
"The result, shown at the top of Figure 4, indicates that line and block boundaries are strongly biased to rest positions and are far less likely to appear at note positions.",
"Words, lines, and blocks rarely span beyond a long melody rest.",
"The bottom of Figure 4 shows the detailed distributions of boundary occurrences for different durations of melody rests, where durations of 480 and 1920 correspond to a quarter rest and a whole rest, respectively.",
"The results exhibit a clear, strong tendency that the boundaries of larger segments tend to coincide more with longer rests.",
"To the best of our knowledge, this is the first study that has ever provided such strong empirical evidence for the phenomena related to the correlations between lyrics segments and melody rests.",
"It is also important to note that the choice of segment boundaries looks like a probabilistic process (i.e., there is a long rest without a block boundary).",
"This observation suggests the difficulty of describing the correlations of lyrics and melody in a rule-based fashion and motivates our probabilistic approach as we present in the next section.",
"Our goal is to build a language model that generates fluent lyrics whose discourse segment fit a given melody in the sense that generated segment boundaries follow the distribution observed in Section",
"3. We propose to pursue this goal by conditioning a 5 The beginning of a line/block and the end of a line/block are equivalent since there is no melody between the end and beginning of a line/block.",
", ,",
".",
") #) ) ) ) 7) ) ) ) ) ( 21 , ) ) ) ) ) 7)",
".",
") #) ) ) ) 7) ) ) ) ) ( 21 , ) ) ) ) ) 7)",
")!#) ) ) ) 7) )!",
")!#) ) ) ) 7) )!",
"standard RNNLM with a featurized input melody.",
"We call this model a Melody-conditioned RNNLM .",
"The network structure of the model is illustrated in Figure 5.",
"Formally, we are given a melody m = m 1 ,...,m i ,...,m I that is a sequence of notes and rests, where m includes a pitch and a duration information.",
"Our model generates lyrics w = w 1 ,...,w t ,...,w T that is a sequence of words and segment boundary symbols: h BOL i and h BOB i , special symbols denoting a line and a block boundary, respectively.",
"For each time step t , the model outputs a single word or boundary symbol taking a pair of the previously generated word w t 1 and the musical feature vector n t for the current word position which includes context window-based features that we describe in the following section.",
"In this model, we assume that the syllables of the generated words and the notes in the input melody have a one-to-one correspondence.",
"Therefore, the position of the incoming note/rest for a word position t (referred to as a target note for t ) is uniquely determined by the syllable counts of the previously generated words.",
"6 The target note for t is denoted as m i ( t ) by defining a function i ( ) which maps time step t to the index of the next note in t .",
"In the following sections, we first describe the details of the proposed model and then present the training strategies used to obtain better models with our melody-lyrics alignment data.",
"Here, the challenging issue with this model is training.",
"Generally, language models require a large amount of text data to learn well.",
"Moreover, this is also the case for learning correlation between rest positions and syllable counts .",
"As shown in Figure 4, most words are supposed to not overlap a 6 Note that our melody-lyrics alignment data used in training does not make this assumption, but we can still uniquely identify the positions of target notes based on the obtained melody-word alignment.",
"long rest.",
"This means, for example, that when the incoming melody sequence for a next word position is note, note, (long) rest, note, note , as the sequence following to m i ( t 1) in Figure 5, it is desirable to select a word whose syllable count is two or less so that the generated word does not overlap the long rest.",
"If there is sufficient data available, this tendency may be learned directly from the correlation between rests and words without explicitly considering the syllable count of a word.",
"However, our melody-lyrics alignments for 1,000 songs are insufficient for this purpose.",
"We take two approaches to address this data sparsity problem.",
"First, we propose two training strategies that increase the number of training examples using raw lyrics that can be obtained in greater quantities.",
"Second, we construct a model that predicts the number of syllables in each word, as well as words themselves, to explicitly supervise the correspondence between rest positions and syllable counts.",
"The proposed model is based on a standard RNNLM (Mikolov et al., 2010):",
"Q where context words are encoded using LSTM (Hochreiter and Schmidhuber, 1997) and the probabilities over words are calculated by a softmax function.",
"w 0 = h B i is a symbol denoting the beginning of lyrics.",
"We extend this model such that each output is conditioned by the context melody vectors n 1 , ..., n t , as well as previous words: P ( w | m ) = Q Tt =1 P ( w t | w 0 , ..., w t 1 , n 1 , ..., n t ) .",
"(2) The model simultaneously predicts the syllable counts of words by sharing the parameters of LSTM with the above word prediction model in order to learn the correspondence between the melody segments and syllable counts: P ( s | m ) = Q Tt =1 P ( s t | w 0 , ..., w t 1 , n 1 , ..., n t ) , (3) where s = s 1 , ..., s T is a sequence of syllable counts, which corresponds to w .",
"For each time step t , the model outputs a word distribution y tw RV and a distribution of syllable count y ts RS using a softmax function:",
"y tw = softmax(BN( W w z t )) , y ts = softmax(BN( W s z t )) ,",
"where z t is the output of the LSTM for each time step.",
"V is the vocabulary size and S is the syllable count threshold.",
"7 W w and W s are weight matrices.",
"BN denotes batch normalization (Ioffe and Szegedy, 2015).",
"The input to the LSTM in each time step t is a concatenation of the embedding vector of the previous word v ( w t 1 ) and the context melody representation x tn , which is a nonlinear transformation of the context melody vector n t : x t = [ v ( w t 1 ) , x tn ] , (6) x tn = ReLU( W n n t + b n ) , (7) where W n is a weight matrix and b n is a bias.",
"To generate lyrics, the model searches for the word sequence with the greatest probability (Eq. 2) using beam search.",
"The model stops generating lyrics when the syllable count of the lyrics reaches the number of notes in the input melody.",
"Note that our model is not specific to the language of lyrics.",
"The model only requires the sequences of melody, words, and syllable counts and does not use any language-specific features.",
"In Section 3, we indicated that the positions of rests and their durations are important factors for modeling boundaries of lyrics.",
"Thus, we collect a sequence of notes and rests around the current word position (i.e., time step t ) and encode their information into context melody vector n t (see the bottom of Figure 5).",
"The context melody vector n t is a binary feature vector that includes a musical notation type (i.e., note or rest), a duration 8 , and a pitch for each note/rest in the context window.",
"We collect notes and rests around the target note m i ( t ) for the current word position t with a window size of 10 (i.e., m i ( t ) 10 , ..., m i ( t ) , ..., m i ( t )+10 ).",
"For pitch information, we use a gap (pitch interval) between a target note m i ( t ) and its previous 7 The syllable counts of the h BOL i and h BOB i are zero.",
"8 We rounded each duration to one of the values 60, 120, 240, 360, 480, 720, 960, 1200, 1440, 1680, 1920, and 3840 and use one-hot encoding for each rounded duration.",
"note m i ( t 1) .",
"Here, the pitch is represented by a MIDI note number in the range 0 to 127.",
"For example, the target and its previous notes are 68 and 65, respectively, and the gap is +3 .",
"Pretraining The size of our melody-lyrics alignment data is limited.",
"However, we can obtain a large amount of raw lyrics.",
"We, therefore, pretrain the model with 53,181 raw lyrics and then fine-tune it with the melody-lyrics alignment data.",
"In pretraining, all context melody vectors n t are zero vectors.",
"We refer to these pretrained and fine-tuned models as Lyrics-only and Fine-tuned models, respectively.",
"Learning with pseudo-melody We propose a method to increase the melody-lyrics alignment data by attaching pseudo melodies to the obtained 53,181 raw lyrics.",
"We refer to the model that uses this data as the Pseudo-melody model.",
"Algorithm 1 shows the details of pseudo-melody generation.",
"For each syllable in the lyrics, we first assign a note to the syllable by sampling the probability distributions.",
"The pitch of each note is generated based on the trigram probability.",
"Then, we determine whether to generate a rest next to it.",
"Since we established the correlations between rests and boundaries of lyrics in Section 3, the probability for a rest and its duration is conditioned by a boundary 167 type next to the target syllable.",
"melody-lyrics alignment data.",
"Figure 6 shows the distributions of the number of boundaries in the pseudo data.",
"The distributions closely resemble those of gold data in Figure 4.",
"We evaluate the proposed Melody-conditioned RNNLMs quantitatively based on two evaluation metrics: (1) a test set perplexity for measuring the fluency; (2) a line/block boundary replication task for measuring the consistency between the melody and boundaries in the generated lyrics.",
"In our model, we chose the dimensions of the word embedding vectors and context melody representation vectors to 512 and 256, respectively, and the dimension of the LSTM hidden state was 768.",
"We used a categorical cross-entropy loss for outputs y tw and y ts , Adam (Kingma and Ba, 2014) with an initial learning rate of 0.001 for parameter optimization, and a mini-batch size of 32.",
"We applied an early-stopping strategy with a maximum epoch number of 100, and training was terminated after five epochs of unimproved loss on the validation set.",
"For lyrics generation, we used a beam search with a width of 10.",
"An example of the generated lyrics is shown in the supplemental material.",
"Perplexity Test-set perplexity (PPL) is a standard evaluation measure for language models.",
"PPL measures the predictability of wording in original lyrics, where a lower PPL value indicates that the model can generate fluent lyrics.",
"We used PPL and its variant PPL-W, which excludes line/block boundaries, to investigate the predictability of words.",
"Accuracy of boundary replication Under the assumption that the line and block boundaries of the original lyrics are placed at appropriate positions in the melody, we evaluated consistency between the melody and boundaries in the generated lyrics by measuring the reproducibility of the boundaries in the original lyrics.",
"Here the metric we used was F 1 -measure of the boundary positions.",
"We also asked a person to place line and block boundaries at plausible positions for randomly selected 10 input melodies that the evaluator has Perplexity F 1 -measure Model PPL PPL-W BOB BOL UB Lyrics-only 138.0 225.0 0.121 0.061 0.106 Full-data 135.9 222.1 0.122 0.063 0.108 Alignment-only 173.3 314.8 0.298 0.287 0.477 Heuristic 175.8 284.7 0.373 0.239 0.402 Fine-tuned 152.2 275.5 0.260 0.302 0.479 Pseudo-melody 115.7 197.5 0.318 0.241 0.406 (w/o y s ) Fine-tuned 155.1 278.1 0.318 0.241 0.366 Pseudo-melody 118.0 201.5 0.312 0.250 0.406 Human -0.717 0.671 0.751 Table 1: Results of the quantitative evaluation.",
"never heard.",
"This person is not a professional musician but an experienced performer educated on musicology.",
"The bottom part of Table 1 represents the human performance.",
"To investigate the effect of our language models, we compared the following six models.",
"The first one is (1) a Lyrics-only model, a standard RNNLM trained with 54,081 song lyrics without melody information.",
"The second and third ones are baseline Melody-conditioned RNNLMs where the proposed training strategies are not applied: (2) a Full-data model trained with mixed data (54,081 song lyrics and 900 melody-lyrics alignments of those), and (3) an Alignment-only model trained with only 900 melody-lyrics alignment data.",
"The fourth one is a strong baseline to evaluate the performance of the proposed approaches: (4) a Heuristic model that",
"(i) assigns a line/block boundary to a rest based on its duration with the same probability, as reported in Figure 4, and",
"(ii) fills the space between any two boundaries with lyrics of the appropriate syllable counts.",
"This Heuristic model computes the following word probability: P ( w t | w 0 , ..., w t 1 , m ) = (8) Q ( h BOB i| m i ( t +1) ) (if w t = h BOB i ) Q ( h BOL i| m i ( t +1) ) (if w t = h BOL i ) (1 Q ( h BOB i| m i ( t +1) ) Q ( h BOL i| m i ( t +1) )) PLSTM ( w t | w 0 ,...,w t 1 ) 1 PLSTM ( h BOL i| w 0 ,...,w t 1 ) PLSTM ( h BOB i| w 0 ,...,w t 1 ) (otherwise) where Q is the same probability as reported in Figure 4.",
"PLSTM is the word probability calculated by a standard LSTM language model.",
"The remaining two are Melody-conditioned RNNLMs with the proposed learning strategies: (5) Fine-tuned and (6) Pseudo-melody models.",
"The top part of Table 1 summarizes the performance of these models.",
"Regarding the boundary replication, the Heuristic , Alignment-only , Fine-tuned , and Pseudo-melody models achieved higher performance than the Lyrics-only model for unlabeled matching of line/block boundaries (i.e., UB).",
"This result indicates that our Melody-conditioned RNNLMs successfully capture the consistency between melody and boundaries of lyrics.",
"The results of the Full-data model is low (as expected) because the size of the melody-lyrics alignment data is far smaller than that of the raw lyrics data and this harms the learning process of the dependency between melody and lyrics.",
"For the block boundary, the Heuristic model achieved the best performances.",
"For the line boundary, the Fine-tuned model achieved the best performances.",
"Regarding PPL and PPL-W, the Lyrics-only , Full-data , and Pseudo-melody models show better results than the other models.",
"The Fine-tuned model shows reduced performance compared with the Lyrics-only model because fine-tuning with a small amount of data causes overfitting in the language model.",
"Also, the training size of the Alignment-only model is insufficient for learning a language model of lyrics.",
"Interestingly, the Pseudo-melody model achieved better performance than the Full-data model and overall achieved the best score.",
"This result indicates that the Pseudo-melody model uses the information of a given melody to make a better prediction of its lyrics word sequence.",
"On the other hand, the Heuristic model had the worst performance, despite training with a large amount of raw lyrics.",
"We analyze the reason for such performance and describe our results in Section 5.5.",
"It is not necessarily clear which to choose, either the Fine-tuned or Pseudo-melody model, which may depend also on the size and diversity of the training and test data.",
"However, one can conclude at least that combining a limited-scale collection of melody-lyrics alignment data with a far larger collection of lyrics-alone data boosts the model's capability of generating a fluent lyrics which structurally fits well the input melody.",
"To investigate the effect of predicting syllable-counts, we compared the performance of the proposed models to models that exclude the syllablecount output layer y s .",
"The middle part of Table 1 summarizes the results.",
"For the pretraining strategy, the use of y s successfully alleviates data sparsity when learning the correlation between syllable counts and melodies from only words themselves.",
"As can be seen, the model without y s shows reduced performance relative to both PPLs and the boundary replication.",
"On the other hand, for the pseudo-melody strategy, the two models are competitive in both measures.",
"This means that the Pseudo-melody model obtained a sufficient amount of word-melody input pairs to learn the correlation.",
"To examine whether the models can capture correlations between rests and boundaries of lyrics, we calculate the proportion of the word, line, and block boundaries in the original lyrics and in the lyrics generated by the Heuristic and Pseudo-melody model for the test set (Figure 7).",
"The proportion of h BOL i and h BOB i generated by the Heuristic model are almost equivalent to those of the original lyrics.",
"On the other hand, for the Pseudo-melody model, the proportion of line/block boundary types for the longer rests are smaller than that of the original lyrics.",
"Although the Heuristic model reproduces the proportion of the original line/block boundaries, the model had a low performance in terms of PPL, 169 0 5 10 15 20 25 30 0 50 100 150 200 N u m b e r o f b l o c k s Syllable counts per block 0 100 200 300 400 500 600 0 10 20 30 40 50 N u m b e r o f li n e s Syllable counts per line JS-divergence: 0.075 JS-divergence: 0.172 JS-divergence: 0.144 JS-divergence: 0.190 0 5 10 15 20 25 30 0 50 100 150 200 N u m b e r o f b l o c k s Syllable counts per block 0 100 200 300 400 500 600 0 10 20 30 40 50 N u m b e r o f li n e s Syllable counts per line 0 5 10 15 20 25 30 0 50 100 150 200 N u m b e r o f b l o c k s Syllable counts per block 0 100 200 300 400 500 600 0 10 20 30 40 50 N u m b e r o f li n e s Syllable counts per line 0 100 200 300 400 500 600 0 5 10 15 20 25 30 35 40 45 50 N u m b e r o f li n e s Syllable counts per line Lyrics in test set Lyrics generated by Heuristic model Lyrics generated by Pseudo-melody model Figure 8: Distribution of the syllable count of the generated lines/blocks Heuristic Lyrics-only Fine-tuned Pseudo-melody Human (Upper-bound) Measure Means SD Median Means SD Median Means SD Median Means SD Median Means SD Median L 2.06 1.08 2 2.33 1.23 2 2.85 1.20 3 2.93 1.14 3 3.56 1.33 4 G 2.28 1.07 2 2.81 1.16 3 2.79 1.06 3 2.97 1.08 3 3.50 1.25 4 LM 2.34 1.07 2 2.91 1.15 3 2.70 1.13 3 2.96 1.09 3 3.49 1.35 4 DM 2.33 1.10 2 2.80 1.06 3 2.59 1.11 3 2.89 1.07 3 3.49 1.30 4 OQ 2.01 1.01 2 2.59 1.15 3 2.42 1.08 2 2.65 1.01 3 3.32 1.19 4 Table 2: Results of the qualitative evaluation.",
"as shown in Section 5.3.",
"By investigating the lyrics generated by the Heuristic model, we found that the model tends to generate line/block boundaries after the melody rest, even if the two rests are quite close.",
"Figure 8 shows the distributions of the syllable per line / block frequency and the distributions of the Jensen-Shannon divergence.",
"While the Heuristic model tends to generate short lines/blocks, our model generates the lyrics so that lines/blocks do not become too short.",
"This result supports that",
"(i) our model is trained using melody and lyric contexts and",
"(ii) the heuristic approach, which simply generates line/block boundaries based on the distribution in Figure 4, cannot generate fluent lyrics with well-formed line/block lengths.",
"To asses the quality of the generated lyrics, inspired by (Oliveira, 2015), we asked 50 Yahoo crowdsourcing workers to answer the following five questions using a five-point Likert scale:",
"Listenability (L) When listening to melody and lyrics, are the positions of words, lines, and segments natural?",
"(1= Poor to 5= Perfect )",
"Grammaticality (G) Are the lyrics grammatically correct?",
"(1= Poor to 5= Perfect )",
"Line-level meaning (LM) Is each line in the lyrics meaningful?",
"(1= Unclear to 5= Clear )",
"Document-level meaning (DM) Are the entire lyrics meaningful?",
"(1= Unclear to 5= Clear )",
"Overall quality (OQ) What is the overall quality of the lyrics?",
"(1= Terrible to 5= Great )",
"For the evaluation sets, we randomly selected four melodies from the RWC Music Database (Goto et al., 2002).",
"For each melody, we prepared four lyrics generated by the Heuristic , Lyrics-only , Fine-tuned , and Pseudo-melody models.",
"Moreover, to obtain an upper bound for this evaluation, we used the lyrics created by amateur writers: we asked four native Japanese speakers to write lyrics on the evaluation melody.",
"One writer was a junior high school teacher of music who had experience in music composition and writing lyrics.",
"Three writers were graduate students with different levels of musical expertise.",
"Two of the three writers had experience with music composition, but none of them had experience with writing lyrics.",
"9 As a result, we obtained 50 (workers) 4 (melodies) 5 (lyrics) samples in total.",
"We note that workers did not know whether lyrics were created by a human or generated by a computer.",
"Table 2 shows the average scores, standard deviations, and medians for each measure.",
"Regarding the Listenability evaluation, workers gave high scores to the Fine-tuned and Pseudo-melody models that are trained using both the melody and lyrics.",
"This result is consistent with the perplexity evaluation result.",
"On the other hand, regarding the Grammat-icality and Meaning evaluation, workers gave high scores to the Lyrics-only and Pseudo-melody models that are well-trained on a large amount of text data.",
"This result is consistent with the result of 9 We release lyrics and audio files used in the qualitative evaluation on the Web ( https://github.com/ KentoW/deep-lyrics-examples ).",
"the boundary replication task.",
"Regarding the Over-all quality evaluation, the Pseudo-melody model outperformed all other models.",
"These results indicate our pseudo data learning strategy contributes to generating high-quality lyrics.",
"However, the quality of lyrics automatically generated is still worse than the quality of lyrics that humans produce, and it still remains an open challenge for future research to develop computational models that generate high-quality lyrics.",
"In the literature, a broad range of research efforts has been reported for computationally modeling lyrics-specific properties such as meter, rhythm, rhyme, stress, and accent Greene et al. (2010); Reddy and Knight (2011); Watanabe et al. (2014, 2016).",
"While these studies provide insightful find-ings on the properties of lyrics, none of those takes the approach of using melody-lyrics parallel data for modeling correlations of lyrics and melody structures.",
"One exception is the work of Nichols et al. (2009), who used melody-lyrics parallel data to investigate, for example, the correlation between syllable stress and pitch; however, their exploration covers only correlations at the prosody level but not structural correlations.",
"The same trend can be seen also in the literature of automatic lyrics generation, where most studies utilize only lyrics data.",
"Barbieri et al. (2012) and Abe and Ito (2012) propose a model for generating lyrics under a range of constraints provided in terms of rhyme, rhythm, part-of-speech, etc.",
"Potash et al. (2015) proposes an RNNLM that generates rhymed lyrics under the assumption that rhymes tend to coincide with the end of lines.",
"In those studies, the melody is considered only indirectly; namely, input prosodic/linguistic constraints/preferences on lyrics are assumed to be manually provided by a human user because the proposed models are not capable of interpreting and transforming a given melody to con-straints/preferences.",
"For generating lyrics for a given melody, we have so far found in the literature two studies which propose a method.",
"Oliveira et al. (2007) and Oliveira (2015) manually analyze correlations among melodies, beats, and syllables using 42 Portuguese songs and propose a set of heuristic rules for lyrics generation.",
"Ramakrishnan A et al. (2009) attempt to induce a statistical model for generating melodic Tamil lyrics from melody-lyrics parallel data using only ten songs.",
"However, the former captures only phonological aspects of melody-lyrics correlations and can generate a small fragment of lyrics (not an entire lyrics) for a given piece of melody.",
"The latter suffers from the severe shortage of data and fails to conduct empirical experiments.",
"This paper has presented a novel data-driven approach for building a melody-conditioned lyrics language model.",
"We created a 1,000-song melody-lyrics alignment dataset and conducted a quantitative investigation into the correlations between melodies and segment boundaries of lyrics.",
"No prior work has ever conducted such a quantitative analysis of melody-lyrics correlations with this size of data.",
"We have also proposed a RNN-based, melody-conditioned language model that generates fluent lyrics whose word/line/block boundaries fit a given input melody.",
"Our experimental results have shown that: (1) our Melody-conditioned RNNLMs capture the consistency between melody and boundaries of lyrics while maintaining word fluency; (2) combining a limited-scale collection of melody-lyrics alignment data with a far larger collection of lyrics-alone data for training the model boosts the model's competence; (3) we have also produced positive empirical evidence for the effect of applying a multi-task learning schema where the model is trained for syllable count prediction as well as for word prediction; and (4) the human judgments collected via crowdsourcing showed that our model improves the quality of generated lyrics.",
"For future directions, we plan to further extend the proposed model for capturing other aspects of lyrics/melody discourse structure such as repetitions, verse-bridge-chorus structure, and topical coherence of discourse segment.",
"The proposed method for creating melody-lyrics alignment data enables us to explore such a broad range of aspects of melody-lyrics correlations.",
"This study utilized the RWC Music Database (Pop-ular Music).",
"This work was partially supported by a Grant-in-Aid for JSPS Research Fellow Grant Number JP16J05945, JSPS KAKENHI Grant Numbers JP15H01702, and JST ACCEL Grant Number JPMJAC1602.",
"The authors would like to thank Dr. Paul Reisert for the English language review."
] | [
"objective",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"In this paper, we investigate improvements to the GEC sequence tagging architecture with a focus on ensembling of recent cutting-edge Transformer-based encoders in Large configurations.",
"We encourage ensembling models by majority votes on span-level edits because this approach is tolerant to the model architecture and vocabulary size.",
"Our best ensemble achieves a new SOTA result with an F 0 .",
"5 score of 76.05 on BEA-2019 (test), even without pretraining on synthetic datasets.",
"In addition, we perform knowledge distillation with a trained ensemble to generate new synthetic training datasets, \"Troy-Blogs\" and \"Troy-1BW\".",
"Our best single sequence tagging model that is pretrained on the generated Troydatasets in combination with the publicly available synthetic PIE dataset achieves a near-SOTA 1 result with an F 0 .",
"5 score of 73.21 on BEA-2019 (test).",
"The code, datasets, and trained models are publicly available.",
"2 1 Introduction The purpose of the Grammatical Error Correction (GEC) task is to correct grammatical errors in natural texts.",
"This includes correcting errors in spelling, punctuation, grammar, morphology, word choice, and others.",
"An intelligent GEC system receives text containing mistakes and produces its corrected version.",
"The GEC task is complicated and challenging: the accuracy of edits, inference speed, and memory limitations are topics of intensive research.",
"Currently, Machine Translation (MT) is the mainstream approach for GEC.",
"In this setting, errorful sentences correspond to the source language, and error-free sentences correspond to the This research was performed during Maksym Tar-navskyi's work on Ms.Sc.",
"thesis at Ukrainian Catholic University (Tarnavskyi, 2021).",
"target language.",
"Early GEC-MT methods leveraged phrase-based statistical machine translation (PBSMT) (Yuan and Felice, 2013).",
"Then this approach rapidly evolved to seq2seq Neural Machine Translation (NMT) based on gated recurrent neural networks (Yuan and Briscoe, 2016) and recent powerful Transformer-based seq2seq models.",
"Transformer-based models autoregressively capture the full dependency among output tokens; however, inference can be slow due to sequential decoding.",
"Grundkiewicz et al. (2019) leveraged a Transformer model (Vaswani et al., 2017) that was pre-trained on synthetic GEC data and right-to-left re-ranking for ensemble.",
"Kaneko et al. (2020) adopted several strategies of BERT (Devlin et al., 2018) usage for GEC.",
"Recently, Rothe et al. (2021) built their system on top of T5 (Xue et al., 2021), a xxl version of the T5 Transformer encoder-decoder model and reached new state-of-the-art results (11B parameters).",
"While still not as widespread as MT, the sequence tagging approach for GEC, which generates a sequence of text edit operations encoded by tags for errorful input text is becoming more common.",
"LaserTagger (Malmi et al., 2019) is a sequence tagging model that casts text generation as a text editing task.",
"Corrected texts are reconstructed from the inputs using three main edit operations: keeping a token, deleting it, and adding a phrase before the token.",
"LaserTagger combines a BERT encoder with an autoregressive Transformer decoder, which predicts edit operations.",
"The Parallel Iterative Edit (PIE) model (Awasthi et al., 2019) does parallel decoding, achieving quality that is competitive with the seq2seq models.",
"3 It predicts edits instead of tokens and iteratively refines predictions to capture dependencies.",
"A similar approach is presented in (Omelianchuk et al., 2020).",
"The GECToR system achieves competitive results using various Trans-3 http://nlpprogress.com/english/ grammatical_error_correction 3842 formers as an encoder; and linear layers with soft-max for tag prediction and error detection.",
"By replacing an autoregressive decoder with linear output layers, it's also potentially several times faster than seq2seq systems.",
"Today, the generation of synthetic data is becoming significant for most GEC models.",
"Natural languages are rich, and their grammars contain many rules and exceptions; therefore, professional linguists are often utilized to annotate high-quality corpora for further training ML-based systems mostly in a supervised manner (Dahlmeier et al., 2013), (Bryant et al., 2019).",
"However, human annotation is expensive, so researchers are working on methods for augmentation of training data, synthetic data generation, and strategies for its efficient usage (Lichtarge et al., 2019), (Kiyono et al., 2019), (Stahlberg and Kumar, 2021).",
"The majority of GEC systems today use synthetic data to pre-train Transformer-based components of their models.",
"In this work, we are focusing on exploring sequence tagging models and their ensembles.",
"Although most of our developments may eventually be applied to other languages, we work with English only in this study.",
"Being a resource-rich language, English is a highly competitive area for the GEC task 3 .",
"Our tagging models are inherited from GECToR (Omelianchuk et al., 2020).",
"To date, GECToR shows near-SOTA results on CoNLL-2014 and BEA-2019 benchmarks.",
"3 It is based on AllenNLP (Gardner et al., 2017) and HuggingFace Transformers (Wolf et al., 2019), and its source code is freely available.",
"4 GECToR is a sequence tagging model that contains a Transformer-based encoder stacked with two output linear layers that are responsible for error detection and error correction.",
"The model is trained with a cross-entropy loss function to produce tags that encode token-level edits.",
"Then iterative postprocessing is performed.",
"GECToR predicts the tag-encoded transformations for each token in the input sequence; it can then apply these transformations to get the modified output sequence.",
"Since some corrections in a sentence may depend on others, applying the GEC sequence tagger only once may not be enough to correct the sentence entirely.",
"Therefore, GECToR uses an iterative correction approach, modifying the sentence by repeatedly running it through the model (up to four times) (Fig. 1).",
"As in GECToR, our primary edit operations are encoded by the following tags: $KEEP (leave the current token unchanged), $DELETE (delete the current token), $APPEND _ t 1 (append the token t 1 after the current token), $REPLACE _ t 2 (replace the current token with the token t 2 ).",
"GECToR also has special edit operations, such as changing the case of a token, changing the verb form to express a different number or tense, or converting singular nouns to plural, and other.",
"We refer to (Omelianchuk et al., 2020) for the details of edit transformations.",
"1. We empirically investigate and improve the GECToR sequence tagging system (Omelianchuk et al., 2020) by upgrading the Transformer encoders to Large configurations, leveraging an advanced tokenizer, performing additional filtering of edits-free sentences, and increasing the vocabulary size.",
"2. We show that the ensembling of sequence taggers by majority votes on output edit spans provides better performance compared to ensembling by averaging output tag probabilities while staying tolerant to the models' architecture and vocabulary sizes.",
"quence taggers.",
"When trained on the distilled data, single GEC tagging models show competitive performance.",
"4. We make the code, datasets, and trained models publicly available.",
"For training single models and ensembles, we use parallel annotated data from the Lang-8 Corpus of Learner English (Lang-8) 5 (Tajiri et al., 2012), the National University of Singapore Corpus of Learner English (NUCLE) 6 (Dahlmeier et al., 2013), the First Certificate in English dataset (FCE) 7 (Yannakoudakis et al., 2011), and the Write & Improve (W&I) Corpus (Bryant et al., 2019).",
"8 Please, see Table 1 for details.",
"For knowledge distillation from the ensemble, we use parts of two monolingual datasets: the One Billion Word Benchmark (1BW) 9 (Chelba et al., 2013) and the Blog Authorship Corpus (Blogs) 10 (Schler",
"5 https://sites.google.com/site/ naistlang8corpora 6 https://www.comp.nus.edu.sg/~nlp/ corpora.html 7 https://ilexir.co.uk/datasets/index. html 8 https://www.cl.cam.ac.uk/research/nl/ bea2019st/data/wi+locness_v2.1.bea19.tar.gz 9 http://statmt.org/wmt11/ training-monolingual.tgz 10 https://www.kaggle.com/rtatman/ blog-authorship-corpus",
"et al., 2005).",
"Corresponding distilled datasets have prefixes \"Troy-\"; see more details about their generation in Section 6.",
"After knowledge distillation for the final training of the student model we also use parallel sentences with synthetically generated grammatical errors from the PIE dataset (Awasthi et al., 2019).",
"11 3.4 Evaluation We report F 0 .",
"5 , P recision , and Recall metrics computed by ERRANT scorer (Bryant et al., 2017) on dev and test datasets from the W&I + LOCNESS Corpus from the BEA-2019 GEC Shared Task (Bryant et al., 2019).",
"In the original GECToR system, the Byte-Pair Encoding (BPE) tokenizer (Sennrich et al., 2016) uses a custom implementation.",
"12 This was chosen because the out-off-the-box AllenNLP tokenizer was too slow, and HuggingFace Transformers' tokenizers did not provide a BPE-to-words mapping.",
"Our work is fully implemented with Transformers from the HuggingFace Transformers library.",
"In particular, we moved to the recently released fast tokenizers from HuggingFace.",
"Now, our encoders have the same tokenizers for fine-tuning as they had for initial pretraining, which leads to better quality after fine-tuning.",
"Our encoder is loaded with its default pretrained weights; the linear layers' weights are initialized with random numbers.",
"Our models are trained by Adam optimizer (Kingma and Ba, 2015) with default hyperparameters.",
"We use a multi-class categorical cross-entropy loss function.",
"The early stopping technique is used: Stopping criteria is 3 epochs without improving the loss function on the dev set, which is a random 2% sample from the same source as training data and is different for each stage.",
"Model training is performed in several stages (Ta-ble 2).",
"In Stage I, the model is pretrained on synthetic datasets; this stage is optional.",
"Then, in Stage II, we carry out warm-up training on the Joint Train Dataset , which contains the Lang-8, NUCLE, FCE, and W&I datasets (Table 1).",
"Thus, we perform coarse fine-tuning on a large amount of diverse GEC data.",
"Datasets are used sequentially with no shuffling.",
"In order not to adversely impact the out-of-box pretrained weights of the encoder, during the first two epochs we train only the linear layers (so-called \"cold epochs\"); later, we make all model's weights trainable.",
"In Stage III, we continue fine-tuning on the W&I Train dataset, which contains only the highest-quality data.",
"Another difference between Stages II and III is the share of edit-free sentences in the training data.",
"We observed that too many sentences in training data without edits lead to reducing the appearance rate of the tagger and deteriorating the overall quality.",
"Therefore, we filter out edit-free sentences from the Joint Train Dataset, which is used in Stage II.",
"In Stage III, we fine-tune the model on the unfiltered version of the W&I Train dataset.",
"The final stage is inference tweaks (Omelianchuk et al., 2020) for balancing between the model's precision and recall.",
"This is done by introducing additional hyperparameters: additional confidence (AC) to the probability for the $KEEP tag and minimum error probability (MEP) for corrections tags.",
"These hyperparameters are found via a random search on the BEA-2019 dev set.",
"In the GECToR paper (Omelianchuk et al., 2020), authors investigated encoders from ALBERT (Lan et al., 2020), BERT (Devlin et al., 2018), GPT-2 (Radford et al., 2018), RoBERTa (Liu et al.,",
"2019), and XLNet (Yang et al., 2019) Transformers in their Base configurations.",
"Most likely, Base configurations were chosen due to the better inference speed/quality ratio.",
"They found that XLNet, RoBERTa, and BERT show the best performance.",
"We reproduce experiments for these encoders, but now we explore Large configurations as well.",
"We additionally explore encoders from DeBERTa (He et al., 2020) (Table 3).",
"We observe that all models that are equipped with Large encoders have higher precision, recall, and F 0 .",
"5 values than those equipped with their Base versions.",
"The price of this performance is 2.32.5 times slower inference for Large configurations (Table 4).",
"The single model with RoBERTa encoder shows the best performance for Large configurations, whereas DeBERTa slightly outperforms RoBERTa for Base configurations.",
"RoBERTa is the fastest in both configurations.",
"Most of the tag-encoded edits are token-specific, e.g., $APPEND_it , $REPLACE_the , and so on.",
"Thus, the tag vocabulary size matters, and should be a tradeoff between coverage and model quality.",
"We create the tag vocabulary by taking the most frequent edit tags generated from the Joint Train Dataset (Table 1).",
"To find the optimal tag vocabulary sizes, we experiment with {5K, 10K} vocabulary sizes (Table 5).",
"We observe that increasing the vocabulary size to 10K for Large encoders may improve the quality, e.g. for models with RoBERTa and DeBERTa.",
"Nevertheless, we also see an example of quality deterioration for the model with XLNet.",
"Ensembling is a proven quality-boosting method for models sets that have diverse outputs.",
"Most of the recent GEC solutions achieved their best results by ensembling single models (Stahlberg and Kumar, 2021), (Omelianchuk et al., 2020), (Awasthi et al., 2019).",
"In this section we consider two ensembling methods for our GEC tagging models: averaging of output tag probabilities and majority votes on output edit spans (Fig. 2).",
"First, we reproduce the ensembling approach from (Omelianchuk et al., 2020).",
"We add DeBERTa and carry out experiments with varying Base and Large configurations of encoders (Table 6).",
"We observe that ensembling by averaging of output tag probabilities improves the quality of corrections; the more models we combine, the better results we obtain.",
"More surprisingly, combining Ensemble P R F 0 .",
"the same encoders' architectures in Base and Large configurations may provide slightly better results than we get for the Base and Large models separately (see RoBERTa ( B ) + RoBERTa ( L ) in Table 6).",
"Although the ensemble RoBERTa ( L ) + BERT ( L ) + DeBERTa ( L ) + XLNet ( L ) shows the best performance, we select ensemble the RoBERTa ( L ) + DeBERTa ( L ) + XLNet ( L ) for further experiments.",
"It has higher recall, making it possible to trade recall for precision later during inference tweaks.",
"This aggregation method combines single models' outputs in the post-processing step (Fig. 2).",
"We take span-level edits and retain only those which have most of the votes from the ensemble.",
"A similar approach is used in (Liang et al., 2020), where the authors combined sequence tagging and seq2seq models for the Chinese language.",
"The advantage of this ensembling method is that we can combine the results of models with different output dimensions and even different architectures.",
"In our work, it allows us to combine models with different tag vocabulary sizes.",
"We leave ensembling with seq2seq GEC systems for future work.",
"First, we compare ensembling by averaging of output tag probabilities + and by majority votes on output edit spans for the selected ensemble after training on the Joint Train Dataset (Stage II), finetuning on the W&I dataset (Stage III) and optimization of hyperparameters (inference tweaks) (Table 7).",
"We observe that ensembles based on 3846 majority votes on output edit spans show better results because of better precision.",
"However, after inference tweaks, the two ensembling types achieve close F 0 .",
"5 scores.",
"To additionally improve the precision of ensembling by majority votes we introduce the \"majority quorum\" hyperparameter N min .",
"Majority quorum N min denotes minumum number of votes for triggering the edit , here 1 N min N single _ models .",
"Increasing N min boosts precision by the cost of recall because it filters out more edits where single models disagree (Table 8).",
"Setting N min = 1 is a poor strategy because we can't rely on a majority when resolving conflicting edits, so the resulting text might contain controversial and incoherent edits.",
"Increasing the number of systems in the ensemble leads to higher quality, but requires adapting the N min parameter (Table 8).",
"Based on this limited analysis we observe that N min = N single _ models 1 achieves the best results.",
"For our pool of models there is no gain over using more than 4 models, but we want to explore adding more diverse seq2seq models to such an ensemble in future works.",
"Next, since the majority votes on output edit spans is capable of combining any models, we test the ensemble of the best models that we already have trained (Table 9).",
"Finally, we evaluate our best ensemble DeBERTa ( L ) 10 K RoBERTa ( L ) 10 K XLNet ( L ) 5 K on the BEA-2019 (test) dataset and achieve F 0 .",
"5 score of 76 .",
"05 .",
"This is a significant improvement over F 0 .",
"5 = 73 .",
"70 for the best ensemble from (Omelianchuk et al., 2020) and to the best of our knowledge is a new state-of-the-art (SOTA) result for ensembles on the BEA-2019 (test) benchmark .",
"obtained without pre-training on synthetic data.",
"Knowledge distillation is the method for transferring knowledge from a large model (\"teacher\") to a smaller one (\"student\") (Hinton et al., 2015), (Kim and Rush, 2016).",
"It has strong practical applications because large models usually have expensive inference costs and are inconvenient for deployment.",
"In our case, the teacher model is an ensemble of trained sequence taggers, whereas the student model is a single sequence tagger.",
"The ensemble receives errorful texts and generates their corrected versions.",
"Later these input-output pairs of sentences are used for training single models.",
"Like any synthetic annotation method, knowledge-distilled data contains a certain share of systematic errors that deteriorates the student model's quality.",
"In this work, we use two monolingual corpora to generate our distilled datasets: the One Billion Words Benchmark (\"1BW\"), which mostly contains news texts, and the Blog Authorship Corpus (\"Blogs\"), which contains blog texts on various topics (Table 1).",
"Being real-world natural texts, these datasets contain a certain share of grammatical errors, which are corrected by our system.",
"For text pre-processing, we use the tokenizer from Spacy.",
"13 As a teacher, we use the ensemble of the sequence taggers containing Large encoders with a 5K vocabulary: DeBERTa ( L ) 5 K + RoBERTa ( L ) 5 K + XLNet ( L ) 5 K (Table 7).",
"The ensemble corrects 5% of processed sentences in 1BW and 28% of sentences in Blogs.",
"Distilled versions of the datasets have the prefix \"Troy-\" in their names (Table 1).",
"Considering our past experience, we use only edited sentence pairs in our distilled datasets, and we limit their number to 1.2M.",
"We also reduce the synthetic PIE dataset from (Awasthi et al., 2019) to 1.2M sentence pairs for better comparability in the experiments.",
"We leave exploring other ensembles in the role of a teacher model for future research.",
"First, we reproduce the training scheme from (Omelianchuk et al., 2020) for a single model, RoBERTa ( L ) 5 K where PIE synthetic data is used for 13 https://spacy.io/",
"pre-training (Stage I), then the model is trained on the Joint Train Dataset (Stage II), fine-tuned on the high-quality W&I dataset (Stage III), and finally, hyperparameters are applied to balance precision and recall (inteference tweaks).",
"We observe that the sequence tagger with a RoBERTa-Large encoder shows slightly better performance than RoBERTa-Base from (Omelianchuk et al., 2020), where RoBERTa-Base had an 8x larger training dataset in Stage I (Fig. 3).",
"Next, we replace the synthetic PIE dataset with our distilled datasets, Troy-1BW and Troy-Blogs.",
"We observe that in Stage I, training on purely synthetic data leads to a dramatic boost in recall.",
"When we start training in Stage II, a sharp deterioration in both precision and recall occurs.",
"It seems that the student model does not receive new information compared to Stage I. This is more noticeable for models trained on the Troy-Blogs dataset, where recall significantly drops after training.",
"However, the F 0 .",
"5 in Stage II is higher for models pretrained on distilled Troydatasets.",
"Finally, after training on Stage III and performing inference tweaks, single models pretrained on both datasets show very similar performance, but the model with RoBERTa ( L ) 5 K trained on Troy-Figure 3: Pre-training of single tagging models on synthetic and distilled datasets with a tag vocabulary size of 5K.",
"Benchmark is BEA-2019 (dev).",
"1BW is slightly higher-performing.",
"This single model reaches F 0 .",
"5 = 73 .",
"21 on BEA-2019 (test), a significant improvement on the results from (Omelianchuk et al., 2020) for single models F 0 .",
"5 = 71 .",
"5 for RoBERTa ( B ) 5 K and F 0 .",
"5 = 72 .",
"4 for XLNet ( B ) 5 K .",
"We observed that models pretrained on the Troy-Blogs dataset show good results on Stage I, but lose their advantage after training on Stage II.",
"Thus, we decided to try a one-stage training approach with a RoBERTa ( L ) 5 K encoder.",
"For our training dataset, we concatenated Troy-Blogs with high-quality W&I dataset that we usually reserve for Stage III.",
"As a result, we achieved F 0 .",
"5 = 55 .",
"81 on BEA-2019 (dev) and F 0 .",
"5 = 72 .",
"69 on BEA-2019 (test) (Table 10).",
"These results are obtained much more easily than with our best single model: just one-stage training with out-of-the-box RoBERTa , no pre-training on synthetic GEC data or multi-stage training.",
"Our research investigates the impact of encoder configurations, ensembling methods, and knowledge distillation on the GECToR system.",
"We found that Replacing Base encoders in GECToR (Omelianchuk et al., 2020) with their Large configurations does improve the quality by several F0.5 points, at the cost of 2.32.5 times slower inference.",
"Our best ensemble achieves a new SOTA result with F 0 .",
"5 = 76 .",
"05 on BEA-2019 (test).",
"Ensembling sequence taggers by majority votes on output edit spans provides better performance than averaging output tag probabilities because it lets us combine a variety of modeling approaches and vocabulary sizes.",
"Single models in the ensemble were not pre-trained on synthetic GEC datasets, providing room for improvement in future work.",
"to an ensemble of sequence taggers and produce the annotated Troy-Blogs and Troy-1BW datasets.",
"After training on these datasets, single GEC sequence tagging models show near-SOTA results: F 0 .",
"5 = 73 .",
"21 / 72 .",
"69 on BEA-2019 (test) for multi-stage/one-stage training.",
"To our knowledge, our best single model is outperformed only by the much more compute-intensive T5 XXL model (Rothe et al., 2021), which is 30 times larger with 11B parameters (Table 10).",
"We make the code, datasets, and trained models publicly available.",
"14 8 Acknowledgements We express our gratitude to Oleksii Molchanovskyi, Dmytro Lider, Viktor Zamaruiev, Paige Schwartz, the Ukrainian Catholic University, and Grammarly for providing support and computational resources.",
"We also thank anonymous reviewers for their contributions.",
"To our communities: While we are writing this, our homeland Ukraine continues to resist the unprovoked Russian invasion.",
"We are grateful to everyone who defends Ukraine, declares support to the people of Ukraine, and is sending aid.",
"Thank you!"
] | [
"objective",
"method",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"An educated and informed consumption of media content has become a challenge in mod-ern times.",
"With the shift from traditional news outlets to social media and similar venues, a major concern is that readers are becoming encapsulated in echo chambers and may fall prey to fake news and disinformation, lacking easy access to dissenting views.",
"We suggest a novel task aiming to alleviate some of these concerns that of detecting articles that most effectively counter the arguments and not just the stance made in a given text.",
"We study this problem in the context of debate speeches.",
"Given such a speech, we aim to identify, from among a set of speeches on the same topic and with an opposing stance, the ones that directly counter it.",
"We provide a large dataset of 3685 such speeches (in English), annotated for this relation, which hopefully would be of general interest to the NLP community.",
"We explore several algorithms addressing this task, and while some are successful, all fall short of expert human performance, suggesting room for further research.",
"All data collected during this work is freely available for research 1 .",
"Recently, a publication on Quantum Computing described a quantum computer swiftly performing a task that arguably would require 10,000 years to be solved by a classical computer (Arute et al., 2019).",
"A non-expert reader is likely to consider this claim as a hard-proven fact, especially due to the credibility of the venue in which this publication appeared.",
"Shortly afterwards, a contesting blog written by other experts in that field 2 1 https://www.research.ibm.com/ haifa/dept/vst/debating_data.shtml#DebateSpeechAnalysis 2 https://www.ibm.com/blogs/research/ 2019/10/on-quantum-supremacy/ argued, among other things, that the aforementioned problem can be simulated on a classical computer, using proper optimizations, in 2 .",
"5 days.",
"Clearly, out of potentially many texts questioning the promise of Quantum Computers (e.g. Kalai (2019)), making readers of the former publication aware of that specific blog post, which directly contests the claims argued in that publication, will provide them with a more informed view on the issue.",
"Broadly, argumentative texts, such as articles that support a certain viewpoint, often lack arguments contesting that viewpoint.",
"This may be because those contesting arguments are not known to the author of the text, as they might not even have been raised at the time of writing.",
"Alternatively, authors may also deliberately ignore certain known arguments, which might undermine their argumentative goal.",
"Regardless of the reason, this issue places readers at a disadvantage.",
"Lacking familiarity with opposing views that specifically challenge a given perspective , may lead to uninformed decisions or establishing opinions based on partial or biased information.",
"Therefore, there is merit to developing a system that can automatically detect such opposing views.",
"Motivated by this scenario, we propose a novel natural language understanding task: Given an input text and a corpus, retrieve from that corpus a counter text which includes arguments contesting the arguments raised in the input text.",
"While contemporary systems allow fetching texts on a given topic, and can employ existing tools to discern its stance and so identify texts with an opposing view they lack the nuance to identify the counter text which directly contests the arguments raised in the input text.",
"The potential use-cases of the proposed system exist in several domains.",
"In politics, it can present counters to partisan texts, thus promoting more informed and balanced views on existing controversies.",
"In social media, it can alleviate the bias caused by the echo chamber phenomenon (Garimella et al., 2018), by introducing opposing views.",
"And in the financial domain, it can potentially help analysts find relevant counter-texts to predictions and claims made in earning calls.",
"It may also help authors to better present their stance, by challenging them with counter texts during their writing process.",
"Lastly, it may aid researches to examine relevant citations by annotating which papers, out of potentially many, hold opposing views.",
"Note, however, that this paper focuses on counter text detection a useful tool for these worthy goals, but not a complete solution.",
"To pursue the aforementioned task, one needs a corresponding benchmark data, that would serve for training and evaluating the performance of an automatic system.",
"For example, one may start with an opinion article, find a set of opinion articles on the same topic with an opposing stance, and aim to detect those that most effectively counter the arguments raised in the opinion article we started with.",
"This path represents a formidable challenge; for example, reliable annotation of long texts is notoriously difficult to obtain (Lavee et al., 2019a), to name just one reason out of many.",
"To overcome this issue, here we focus on a unique debate setup, in which the goal of one expert debater is to generate a coherent speech that counters the arguments raised in another speech by a fellow debater.",
"Specifically, as part of Project Debater 3 , we collected more than 3,600 debate speeches, each around four minutes long, recorded by professional debaters, on a wide variety of controversial topics, posed as debate motions (e.g. we should ban gambling ).",
"With this paper, we make this data available to the community at large.",
"Each motion has a set of supporting speeches, and another set of opposing speeches, typically recorded in response to one and only one of the supporting speeches.",
"Correspondingly, our task is defined as follows.",
"Given a motion, a supporting speech, and a set of candidate opposing speeches discussing the same motion, identify the opposing speeches recorded in response to the supporting speech.",
"We analyze human performance on this chal-3 https://www.research.ibm.com/ artificial-intelligence/project-debater/ lenging task, over a sample of speeches, and further report systematic results of a wide range of contemporary NLP models.",
"Our analysis suggests that expert humans clearly outperform the examined automatic methods, by employing a potentially non-trivial mix of heuristics.",
"In summary, our main contributions are as follows: (1) Introducing a novel NLU task, of identifying the long argumentative text that best refutes a long argumentative text given as input.",
"(2) Suggesting to simulate the proposed general task in a well-framed debate setup, in which one should identify the response speech(es) that rebuts a given supporting speech.",
"(3) Sharing a large collection of more than 3,600 recorded debate speeches, that allow to train and evaluate automatic methods in our debate-setup task.",
"(4) Providing empirical results for a variety of contemporary NLP models in this task.",
"(5) Establishing the performance of humans in this task, conveying that expert humans currently outperform automatic methods.",
"Most similar to our work is the task of retrieving the best counter argument to a single given argument (Wachsmuth et al., 2018), also within the debate domain.",
"However, in that setting counterar-guments may discuss different motions, or have the same stance towards one motion.",
"In our setting, identifying speeches discussing the same motion can be done using existing NLP methods, and being of opposing stances may be explored with various sentiment analysis techniques.",
"Our focus is on identifying the response to a supporting speech within a set of opposing speeches, all discussing the same motion .",
"Other than the different setup, our task also handles a more complex premise speeches which are substantially longer than any single argumentative unit, and include multiple such units.",
"An alternative to our approach is breaking the problem into three stages: (1) identifying specific arguments made in each debate speech; (2) establishing counterargument relations between such arguments found in different speeches; (3) choosing the best response speech based on these argument-level relations.",
"The first subproblem has been recently explored in Mirkin et al. (2018b); Lavee et al. (2019b); Orbach et al. (2019).",
"The second is related to a major research area within computational argumentation (see recent surveys by Cabrio and Villata (2018); Lawrence and Reed (2019)).",
"Such research includes detecting attack relations between arguments (Cabrio and Villata, 2012; Rosenthal and McKeown, 2015; Peldszus and Stede, 2015b; Cocarascu and Toni, 2017; Wachsmuth et al., 2018), modeling them (Sridhar et al., 2015), depicting these relations (Walker et al., 2012; Peldszus and Stede, 2015a; Musi et al., 2017), generating counter-arguments (Hua and Wang, 2018; Hua et al., 2019), and establishing a theoretical framework for engagement (Toulmin, 2003; Govier, 1991; Dung, 1995; Damer, 2009; Walton, 2009).",
"A major drawback of the above approach is that it requires a considerable labeling effort the annotation of arguments mentioned within speeches which has been shown to be a challenge (Lavee et al., 2019a).",
"Another is that the methods in the above studies which focus on establishing relations at the individual argument level may be limited when aiming to evaluate the perspective of long texts.",
"Specifically, a response speech may contain multiple arguments that relate to the supporting speech in different ways.",
"For instance, the speaker in such a speech may choose to concede an argument, while still maintaining an opposite view.",
"Therefore simply mapping argument level relations may fall short when trying to generalize and assess full speeches.",
"Our task complements the above endeavors by facilitating a framework that would allow extending their granularity from the argument level to a full-text level.",
"Also, our main motivation is different detecting whole long counter speeches, and not the exact counter arguments within the counter speech.",
"The latter, perhaps more challenging goal, is out of scope for this work.",
"New neural models have recently driven performance improvements across many NLP tasks (Devlin et al., 2018; Radford et al., 2018), surpassing the level of non-expert humans in a diverse set of benchmark tasks (Wang et al., 2018; McCann et al., 2018).",
"To facilitate the progress of further research Wang et al. (2019) introduced a benchmark aiming to pose a new series of rigorous tests of language understanding which are challenging for cutting-edge NLP technologies.",
"Our work is consistent with the motivation behind these benchmarks, as it suggests a challenging new NLU task, accompanied by a corresponding dataset and benchmarks.",
"The rise of deliberate disinformation, such as fake news, highlights the erosion in the credibility of consumed content (Lazer et al., 2018), and situations where one is exposed only to opinions that agree with their own, as captured by the notion of echo chambers, are becoming more prevalent (Garimella et al., 2018; Duseja and Jham-tani, 2019).",
"The task proposed in this work seems timely in this context.",
"We now detail the process of collecting the speeches, the structure of the dataset, and how it is used for our task.",
"Dataset structure Each speech in the dataset discusses a single motion and is either a supporting speech in which a single speaker is arguing in favor of the discussed motion, or an opposing speech in which the speaker is arguing against the motion, typically in response to a supporting speech for that motion.",
"As described below, debaters recording an opposing speech typically listen to a given recorded supporting speech, and then design and record their own speech in response to it.",
"This counter speech is either explicit including a rebuttal part in which the speaker directly addresses arguments raised in the rebutted speech, or implicit including no such dedicated rebuttal section, but tacitly relating to the issues raised in the supporting speech they respond to.",
"The data contains multiple counter speeches to each supporting speech, among which some, none or all may be explicit or implicit.",
"Figure 1 depicts the structure of this dataset.",
"Examples of one explicit and one implicit counter speeches are included in the Appendix.",
"Recording speeches The supporting speeches were produced by a team of professional debaters, using a procedure similar to the one described in Mirkin et al. (2018a): The debaters were each given a list of motions, accompanied by relevant background materials (taken from an online resource such as Wikipedia).",
"They were allowed ten minutes of preparation time to review a motion's background material, after which they recorded a speech arguing in favor of that motion, which was around four minutes long.",
"Through this process, 1797 supporting speeches were recorded, discussing 460 motions.",
"To record an opposing speech, the debaters were Op. 1 Op. 2 Op. 3 Op. 4 Op. 5 Sup.",
"first given ten minutes to review the background material for the motion, as in the recording of a supporting speech.",
"Then, they listened to a supporting speech (recorded by a fellow debater) and recorded a counter speech of similar length.",
"Due to different debate styles popular in different parts of the world, some debaters recorded explicit counter speeches while others recorded implicit ones.",
"To expedite the pace of the recording process, towards its end, few opposing speeches were recorded without requiring the debater to respond to a specific supporting speech.",
"Instead, the debaters were instructed to think of supporting arguments themselves, and respond to these arguments.",
"In total, 1887 opposing speeches were recorded: 348 are explicit counters, 1389 are implicit, and the other 150 are not the counter speech of any supporting speech.",
"The full guidelines used by the debaters during the recordings are included in the Appendix.",
"The recorded audios were automatically transcribed into text using Watson's off-the-shelf Automatic Speech to Text (STT) 4 .",
"Human transcribers listened to the recorded speeches, and manually corrected any errors found in the transcript texts produced by the STT system.",
"On average, each speech transcript contains 28 .",
"2 sentences, and averages 738 .",
"6 tokens in length.",
"For the purpose of this work, the manually-corrected transcripts are used.",
"The full data of 3685 speeches, including the recorded audios, the STT system outputs and the manually-corrected 4 https://www.ibm.com/cloud/ watson-speech-to-text transcripts are available on our website 5 .",
"For comparison, the previous release of Project Debater's speeches dataset (Lavee et al., 2019b) included a smaller subset of 400 speeches.",
"Further details on the format of the full data and the recordings process are available in Mirkin et al. (2018a).",
"Usage As noted above, our task input is comprised from a supporting speech and several candidate opposing speeches all discussing the same motion.",
"Some candidates are counters of the supporting speech, and others are typically counters of other supporting speeches for the same motion.",
"The goal is to identify those counter speeches made in response to the supporting speech.",
"Opposing speeches produced by the speaker of the supporting speech were excluded from the candidates set, as in the real world it is unexpected for one to simultaneously support both sides of a discussion.",
"Recently, with deep learning techniques achieving human performance on several NLU tasks, and even surpassing it, there is growing interest in raising the bar (Wang et al., 2019).",
"That is, to facilitate advancing NLU beyond the current state-of-the-art, there is a need for novel tasks which are solvable by humans, yet challenging for automatic methods.",
"To assess our proposed task in this context, we performed an annotation experiment, as described below.",
"Setup Each question presented one supporting speech and between 3 to 5 candidate opposing speeches, all discussing the same motion.",
"Annotators were instructed to read the speeches, and select one opposing speech which they thought was a counter speech of the supporting speech.",
"When they could not identify such a counter, they were asked to guess and mention that they had done so.",
"60 questions were randomly sampled and given to 3 English-proficient expert annotators, who have successfully worked with our team in other past annotation experiments.",
"Following their choice of a counter speech, they were asked to explain their choice in free form language.",
"Following this step, one of the authors read the explanations provided by the experts and formed 5 https://www.research.ibm.com/ haifa/dept/vst/debating_data.shtml#DebateSpeechAnalysis All Explicit Implicit A R A R A R Ex 85 .",
"a set of reason categories.",
"Then, another 60 questions were sampled and given to 3 crowd annotators, using the Figure-Eight 6 crowdsourcing platform.",
"The crowd annotators were from a dedicated group which regularly participates in annotations done by our team.",
"After choosing a counter speech, they were instructed to choose the reason (or multiple reasons) for their choice from the set of reason categories.",
"The crowd payment was set to 2 .",
"5$ per question.",
"To encourage thorough work, a post-processing bonus was given for each correct answer, doubling that pay.",
"The full guidelines given to the expert and crowd annotators are provided in the Appendix.",
"Results Performance was evaluated by calculating the accuracy of each annotator, and averaging over annotators.",
"These results are presented in Table 1. Overall, the experts obtained an average accuracy of 86 % ( Ex row), considerably better than randomly guessing the answer which yielded an accuracy of 31 %.",
"The accuracy of the crowd annotators ( Cr ) was lower, yet distinctly better than random.",
"This suggests that the task is difficult, and may require a level of dedication or expertise beyond what is common for crowd-annotators.",
"Fortunately, the dataset is constructed in such a way that human annotation is not required to label it it is clear by design which opposing speech counters which supporting speech.",
"To establish whether identifying explicit counters is easier than identifying implicit ones, the average annotator accuracy was separately computed for these two types.",
"Noteworthy, the accuracy of the experts drops from a near perfect score of 92 % on questions with an explicit true counter, to 76 % on questions with an implicit one.",
"Some of the drop may be explained by the smaller chance of guessing the correct answer at random over 6 www.figure-eight.com 0 0.05 0.1 0.15 0.2 0.25 0.3 % o f r e a s o n s Correct Wrong Figure 2: The distribution of reasons for the correct and wrong answers of crowd annotators (who overall had accuracy 60 % ).",
"this set, but not all 7 .",
"This suggests that, as may be expected, identifying implicit counter speeches is more challenging than identifying an explicit counter.",
"Still, the performance of both types of annotators, over both types of speeches, was better than random.",
"Reasons analysis The explanations provided by the experts revealed several best-practices for this task, which we categorized as follows: The true counter speech quote s a phrase from the supporting speech; mention s a specific case or argument from the supporting speech; is more comprehensive and addresses more issues raised in the supporting speech than the other candidates; addresses those issues in the same order as they appear in the supporting speech; discusses similar issues; deals with the main issue raised in the supporting speech.",
"Another reason was elimination discarding the other candidates since they responded to issues or arguments which were not raised in the supporting speech.",
"The last two categories were guess and other (which required writing a reason in free form language).",
"Focusing on crowd annotators who did the task relatively well (accuracy 60 % ), Figure 2 presents the distribution of the reasons they gave for their answers, separated between cases when they were correct and when they were wrong.",
"Overall, the reasons distribution suggests that correctly solving this task requires balancing between the various heuristics.",
"While some of these reasons, such as similarity , correspond to existing algorithmic ideas, others (e.g. order or main issue ) 7 Suppose that when answering, annotators answer correctly a fraction f of the time, and guess 1 f of the time, with probability of success equal to the random baseline.",
"Then in the explicit case f = 0 .",
"87 and in the implicit f = 0 .",
"67 .",
"could inspire future research.",
"Having established that experts perform well on this task, the question remains whether present NLP methods can match that performance.",
"Data A supporting speech was included in the experiments if",
"(a) there was an opposing speech addressing it; and",
"(b) there was at least one additional opposing speech discussing its motion which was produced either in response to another supporting speech, or without responding to any specific supporting speech.",
"Supporting speeches not meeting these criteria were excluded from the analysis.",
"With these criteria, the data used in the experiments comprised 1102 supporting speeches and 1708 opposing speeches, pertaining to 329 motions.",
"their speeches were partitioned accordingly.",
"Settings To separately evaluate the ability to detect explicit and implicit counters, the experiments were performed in three settings.",
"The first utilized the entire data given a supporting speech, all of the opposing speeches discussing its motion were considered as candidate counters.",
"In the second setting, the true counter speeches were limited to explicit counters.",
"Supporting speeches without any explicit counter were excluded.",
"Similarly, in the last setting, the true counter speeches were limited to implicit counters, and supporting speeches without such counters were excluded.",
"For example, a supporting speech with one explicit counter, one implicit counter and whose motion is associated with two other opposing speeches (which are not its counters), is considered with all four opposing speech candidates in the first setting and three such candidates in the second and third settings the two non-counters and the one counter of the type relevant to the setting.",
"Table 2 details the statistics of each data split and experimental setting.",
"Evaluation The methods described next score each of the candidate counters.",
"We report the average accuracy of the top predictions ( A ) and the average mean reciprocal rank ( M ), defined as 1 /r where r is the highest rank of a true counter.",
"Document similarity Our first method represented speeches as bag-of-terms vectors, where terms are stemmed unigrams appearing in at least 1 % of the speech-pairs in the training set, and the term counts are normalized by the total count of terms in the speech.",
"Given two vectors, their similarity was computed using the Cosine similarity ( Cos ) or the inverse Jensen-Shannon divergence ( JS ).",
"Similarity and Dissimilarity Wachsmuth et al. (2018) presented a method for retrieving the best counter argument to a given argument, based on capturing the similarity and dissimilarity between an argument and its counter.",
"At its core, their method is based on two similarity measures between pairs of texts:",
"(i) A word-based similarity, which is defined by the inverse Manhattan distance between the normalized term frequency vectors of the texts (where terms were as mentioned above);",
"(ii) An embeddings-based similarity which used pretrained ConceptNet Number-batch word embeddings (Speer et al., 2017) to represent the words of the texts, averaged those embeddings to obtain a vector representing each text, and calculated the inverse Word Mover's distance (Kusner et al., 2015) between these vectors.",
"Previously, these measures were used to predict the relations between a pair of argumentative units.",
"Since our speeches may contain multiple arguments, and their location within the text is unknown, we defined this method at the speech level by considering every supporting speech sentence and every candidate counter speech sentence.",
"For each measure, the similarities of one supporting speech sentence to all candidate counter speech sentences were aggregated by applying a function f , yielding a sentence-to-speech similarity.",
"These sentence-to-speech similarities were aggregated using another function g , yielding a speech-to-speech similarity.",
"We denote these speech-to-speech measures by w fg for word-based similarities and e fg for embedding-based similarities.",
"As aggregation functions, the maximum ( ), minimum ( ), average ( + ) and product ( ) were considered.",
"For example, w + denotes taking the maximal word-based similarity of each supporting speech sentence to all candidate counter speech sentences, and averaging those values.",
"where sim and dissim are of the form w fg + e fg , both f and g are aggregation functions, sim (cid:54) = dissim and is a weighting factor.",
"In this scoring model sim aims to capture topic similarity, whereas subtracting dissim seeks to capture the dissimilarity between arguments from opposing stances.",
"Admittedly, this method is more appropriate for some of the settings explored in Wachsmuth et al. (2018), in which the candidate counter arguments to a given argument may be discussing other topics, and their stance towards the discussed topic is unknown.",
"We include their method here for completeness, and to allow a comparison to their work.",
"The hyper-parameters, namely, the aggregation functions and the value of (from the range { 1 , 0 .",
"9 , 0 .",
"8 } used by Wachsmuth et al. (2018)) were tuned on the validation set.",
"An additional variant ( SD-e ) based solely on the embeddings-based similarity was also considered, since it carries the advantage of not requiring any vocabulary to be derived from the training set.",
"This allowed tuning the hyper-parameters on a larger set comprised from both the training and validation sets.",
"BERT Devlin et al. (2018) presented the BERT framework which was pre-trained on the masked language model and next sentence prediction tasks.",
"Assuming that an argument and its counter are coherent as consecutive sentences, and that the first sentences of the candidate speech reference the last sentences of the supporting speech, those parts were scored using the pre-trained next-sentence prediction model with ( BERT-T ) and without ( BERT ) fine-tuning.",
"The considered sentences from each speech were limited to at most 100 words, since the pre-trained model is limited to 512 word pieces (assuming about two word pieces per word).",
"Specifically, from the first speech we took the greatest number of sentences from the end of the speech such that their total length was less than 100 words, and similarly for the second speech for its starting sentences.",
"For fine-tuning, we used the supporting speeches with each of their true counter speeches as positive sentence pairs, and added an equal number of negative pairs where the supporting speech appears with a randomly sampled opposing speech that is not its counter.",
"ngram-based The methods described so far assign a score to a supporting speech and a candidate counter without considering the other candidates.",
"Using that content can aid in detecting key phrases or arguments which best characterize the connection between the supporting speech and its counter these are the ones which are shared between those speeches and are not mentioned in any of the other candidates.",
"Having many such phrases or arguments may be an indication that a candidate is a true counter speech.",
"Indeed, the quote and mention reason categories account for more than 20 % of the reasons selected by the crowd annotators when answering correctly (see Table 2).",
"To capture this intuition, ngrams containing between 2 to 4 tokens were extracted from each speech.",
"Those containing stopwords, and those fully contained within longer ngrams, were removed.",
"The set of ngrams which appear in both the supporting speech and the candidate but not in any of the other candidates was calculated, and the total length of the ngrams it contains was used as the score of the candidate ( ngrs ).",
"Mutual Information The speeches were represented as bag-of-terms binary vectors, where the terms are stemmed unigrams (excluding stop-words).",
"Each candidate counter was scored using the mutual information between its vector and the vector of the supporting speech ( MI ).",
"In addition, the mutual information between those vectors, conditioned by the presence of terms in the other candidate counters ( c-MI ), was calculated as follows.",
"Let v s be a vector representing a supporting speech and { v c } nc =1 be a set of n vectors representing its candidate counters.",
"Let c be such a candidate counter, and o c represent the concatenation of the vectors of the other candidates excluding c .",
"Let v c | k denote the vector of values from v c at the indices where the entries of o c are k (for k = 1 or 0 ) , and let v s | k be defined similarly.",
"Then, the conditional mutual information of the candidate c is given by 1 (cid:88) k =0 p ( k ) I ( v s | k ; v c | k ) where p ( k ) is the percentage of entries of o c with the value k , and I ( , ) is mutual information.",
"Intuitively, this measure aims to quantify the information shared between a supporting speech and a candidate, after observing the content of all other candidates, and thus is similar in spirit to the ngram-based method mentioned above.",
"Table 3 presents the results obtained by the different methods in our three experimental settings.",
"These results show that there is a large performance gap between the implicit and explicit settings in favor of the latter for all methods (ex-cept BERT ), suggesting it is an easier setting.",
"This is consistent with the results of our annotation experiment.",
"While the best performing methods ( JS and c-MI ) surpass the performance of individual crowd annotators (see Table 1), which testifies to the dif-ficulty of the annotation task, the human experts clearly do better, suggesting there is still much room for improvement.",
"Error analysis We have manually analyzed the top 3 implicit and explicit speeches for which the differences in mutual information between the predicted counter speech and the true counter speech were the greatest.",
"Analysis revealed that such counter speeches are characterized by argumentative material that is thematically similar to the material of the input speech.",
"Depending on the use case, such results are not necessarily errors, since if the goal is to find relevant opposing content it is beneficial to present such speeches, even All Explicit Implicit Method A M A M A M JS 51 .",
"if they were not authored in response to the input speech.",
"However, in some instances a thematically similar argument may be an irrelevant counter as arguments can share a theme without being opposing.",
"For example, an input text may discuss an argument pertaining to the rights of a disenfranchised group, while the counter may revolve around pragmatic outcomes to the same disenfranchised group.",
"While these arguments are likely to share the theme of disenfranchisement they are not necessarily opposing.",
"The data presented here was collected to facilitate the development of Project Debater, and we chose the novel counter speech detection task to showcase this data and make it available to the community.",
"However, the unique properties of our data recorded speech which is more organized and carefully construed than everyday speech make it interesting to revisit well-known NLP and NLU tasks.",
"Several examples are listed below.",
"Author attribution: All speeches in the dataset are annotated for the debater who recorded them.",
"It could be particularly interesting to study author attribution on our dataset as it contains persuasive language, relevant to opinion writing and social media.",
"Additionally, we provide voice recordings and transcripts for all speeches, enabling to study multi-modal methods for this task.",
"Topic identification: This is a well established research area which can be examined here in various aspects, including clustering speeches by topic, matching speeches to topics or extracting the topic of a speech without prior knowledge.",
"Whereas previous work often requires annotating the topics of texts and deducing a consensual label, in our data the topic of a speech is given by design.",
"Sentence ordering or local coherence: The sentence ordering task (Barzilay and Lapata, 2005) is concerned with organizing text in a coherent way and is especially relevant for natural language generation.",
"Our dataset allows to study this using spoken natural language of a persuasive nature, that often relies on a careful development of an argumentative intent.",
"The data also provides a unique opportunity to study the interplay between a coherent arrangement of language and the associated prosodic cues.",
"Other tasks The large scale of the dataset, over 200 hours of spoken content and their manually-corrected transcripts, enables its use in other speech-processing tasks that require such data.",
"Some examples include speech-to-text, text-to-speech, and direct learning from speech of word (Chung and Glass, 2018) or sentence (Haque et al., 2019) embeddings.",
"Such tasks often use large scale datasets of read content (e.g. Panayotov et al. (2015)), and our data allows their exploration in the context of spoken spontenous speech.",
"In addition, with further annotations of the dataset, it lends itself to other potential tasks.",
"One example is the extraction of the main points of a speech or article.",
"This can facilitate various downstream tasks, such as single document summarization in the context of spoken language.",
"Another example is the annotation of named entities within the transcript texts, facilitating direct identification of those entities in the audio, similarly to the work of Ghannay et al. (2018).",
"We presented a novel NLU task of identifying a counter speech, which best counters an input speech, within a set of candidate counter speeches.",
"As previous studies have shown, and consistent with our own findings, obtaining data for such a task is difficult, especially considering that labeling at scale of full speeches is an arduous effort.",
"To facilitate research of this problem, we recast the proposed general task in a defined debate setup and construct a corresponding benchmark data.",
"We collected, and release as part of this work, more than 3,600 debate speeches annotated for the proposed task.",
"We presented baselines for the task, considering a variety of contemporary NLP models.",
"The experiments suggest that the best results are achieved using JensenShannon similarity, for speeches that contain explicit responses (accuracy of 80 %) and using conditional mutual-information on speeches that respond to the input speech in an implicit way (accuracy of 43 %).",
"We established the performance of humans on this task, showing that expert humans currently outperform automatic methods by a significant margin attaining an accuracy of 92% on speeches with an explicit true counter, and 76% on speeches with an implicit one.",
"Noteworthy is that some of the automatic methods outperform the results achieved by the crowd, suggesting that the task is difficult, and may require a level of expertise beyond layman-level.",
"The reported gap between the performance of expert humans and the results achieved by NLP models demonstrate room for further research.",
"Future research may focus on the motivation we described, but may also utilize the large speeches corpus we release as part of this work to a variety of additional different endeavors.",
"We wish to thank the many debaters and transcribers that took part in the effort of creating this dataset, and the anonymous reviewers for their insightful comments, suggestions, and feedback."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"objective",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"other",
"other",
"objective",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other"
] |
[
"This paper proposes an approach for applying GANs to NMT.",
"We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator.",
"The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from human-translated ones.",
"The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium.",
"Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points.",
"During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator.",
"Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-of-the-art Transformer on English-German and Chinese-English translation tasks.",
"Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014) which directly leverages a single neural network to transform the source sentence into the target sentence, has drawn more and more attention in both academia and industry (Shen et al., 2015; Wu et al., 2016; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017).",
"This end-to-end NMT typically consists of two sub neural networks.",
"The encoder network reads and encodes the source sentence into the context vector representation; and the decoder network generates the target sentence word by word based on the context vector.",
"To dynamically generate a context vector for a target word being generated, the attention mechanism which enables the model to focus on the relevant words in the source-side sentence is usually deployed.",
"Under the encoder-decoder framework, many variants of the model structure, such as convolutional neural network (CNN) and recurrent neural network (RN-N) are proposed (Bahdanau et al., 2014; Gehring et al., 2017).",
"Recently, (Gehring et al., 2017) propose the Transformer, the first sequence transduction model based entirely on attention, achieving state-of-the-art performance on the English-German and English-French translation tasks.",
"Despite its success, the Transformer, similar to traditional NMT models, is still optimized to maximize the likelihood estimation of the ground word (M-LE) at each time step.",
"Such an objective poses a hidden danger to NMT models.",
"That is, the model may generate the best candidate word for the current time step yet a bad component of the whole sentence in the long run.",
"Minimum risk training (MRT) (Shen et al., 2015) is proposed to alleviate such a limitation by adopting the sequence level objective, i.e., the sentence-level BLEU, for traditional NMT models.",
"Yet somewhat improved, this objective still does not guarantee the translation results to be natural and sufficient.",
"Since the BLEU point is computed as the geometric mean of the modified n-gram precisions (Papineni et al., 2002), almost all of the existing objectives essentially train NMT models to generate sentences with n-gram precisions as high as possible (MLE can be viewed to generate sentences with high 1-gram precisions).",
"While n-gram precisions largely tell the good sentence apart from the bad one, it is widely acknowledged that higher n-gram precisions do not guarantee better sentences (Callison-Burch and Osborne, 2006; Chatterjee et al., 2007).",
"Additionally, the manually defined objective, i.e., 1346 the n-gram precision, is unable to cover all crucial aspects of the data distribution and NMT models may be trained to generate suboptimal sentences (Luc et al., 2016).",
"In this paper, to address the limitation mentioned above, we borrow the idea of generative adversarial training from computer vision (Goodfel-low et al., 2014; Denton et al., 2015) to directly train the NMT model generating sentences which are hard to be discriminated from human translations.",
"The motivation behind is that while we can not manually define the data distribution of golden sentences comprehensively, we are able to utilize a discriminative network to learn automatically what the golden sentences look like.",
"Following this motivation, we build a conditional sequence generative adversarial net where we jointly train two sub adversarial models: A generator generates the target-language sentence based on the input source-language sentence; And a discriminator, conditioned on the source-language sentence, predicts the probability of the target-language sentence being a human-generated one.",
"During the training process, the generator aims to fool the discriminator into believing that its output is a human-generated sentence, and the discriminator makes efforts not to be fooled by improving its ability to distinguish the machine-generated sentence from the human-generated one.",
"This kind of adversarial training achieves a win-win situation when the generator and discriminator reach a Nash Equilibrium (Zhao et al., 2016; Arora et al., 2017; Guimaraes et al., 2017).",
"Besides generating the desired distribution, we also want to directly guide the generator with a static and specific objective, such as generating sentences with high BLEU points.",
"To this end, the smoothed sentence-level BLEU (Nakov et al., 2012) is utilized as the reinforced objective for the generator.",
"During training, we employ both the dynamic discriminator and the static BLEU objective to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator.",
"In summary, we mainly make the following contributions: To the best of our knowledge, this work is among the first endeavors to introduce the generative adversarial training into NMT.",
"We directly train the NMT model to generate sentences which are hard to be discriminated from human translations.",
"The proposed model can be applied to any end-to-end NMT systems.",
"We conduct extensive experiments on English-German and Chinese-English translation tasks and we test two different NMT models, the traditional RNNSearch (Bah-danau et al., 2014) and the state-of-the-art Transformer.",
"Experimental results show that the proposed approach consistently achieves great success.",
"Last but not least, we propose the smoothed sentence-level BLEU as the static and specific objective for the generator which biases the generation towards achieving high BLEU points.",
"We show that the proposed approach is a weighted combination of the naive GAN and MRT.",
"The RNNSearch is the traditional NMT model which has been widely explored.",
"We follow the de facto standard implementation by (Bahdanau et al., 2014).",
"The encoder is a bidirectional gat-ed recurrent units that encodes the input sequence x = ( x 1 , . . . , x m ) and calculates the forward sequence of hidden states ( h 1 , . . . , h m ) , and a backward sequence of hidden states ( h 1 , . . . , h m ) .",
"The final annotation vector h j is calculated by concatenating h j and h j .",
"The decoder is a recurrent neural network that predicts a target sequence y = ( y 1 , . . . , y n ) .",
"Each word y i is predicted on a recurrent hidden state s i , the previously predicted word y i 1 and a context vector c i .",
"The c i is computed as a weighted sum of the encoded annotations h j .",
"The weight a ij of each annotation h j is computed by the attention mechanism, which models the alignment between y i and x j .",
"The Transformer, recently proposed by (Vaswani et al., 2017), achieves state-of-the-art results on both WMT2014 English-German and WMT2014 English-French translation tasks.",
"The encoder of Transformer is composed of a stack of six identical layers.",
"Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.",
"The decoder is also composed of a stack of six identical layers.",
"In addition to the two sub-layers in each encoder layer, the decoder inserts a third 1347 sub-layer, which performs multi-head attention over the output of the encoder stack.",
"The Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers since it allows for significantly more parallelization.",
"2.2 Generative adversarial nets Generative adversarial network, has enjoyed great success in computer vision and has been widely applied to image generation (Zhu et al., 2017; Radford et al., 2015).",
"The conditional generative adversarial nets (Gauthier, 2014) apply an extension of generative adversarial network to a conditional setting, which enables the networks to condition on some arbitrary external data.",
"Some recent works have begun to apply the generative adversarial training into the NLP area: (Chen et al., 2016) apply the idea of generative adversarial training to sentiment analysis and (Zhang et al., 2017) use the idea to domain adaptation tasks.",
"For sequence generation problem, (Yu et al., 2016) leverage policy gradient reinforcement learning to back-propagate the reward from the discriminator, showing presentable results for poem generation, speech language generation and music generation.",
"Similarly, (Zhang et al., 2016) generate the text from random noise via adversarial training.",
"A striking difference from the works mentioned above is that, our work is in the conditional setting where the target-language sentence is generated conditioned on the source-language one.",
"In parallel to our work, (Li et al., 2017) propose a similar conditional sequence generative adversarial training for dialogue generation.",
"They use a hierarchical long-short term memory (LSTM) architecture for the discriminator.",
"In contrast to their approach, we apply the CNN-based discriminator for the machine translation task.",
"Furthermore, we propose to utilize the sentence-level BLEU as the specific objective for the generator.",
"Detailed training strategies for the proposed model and extensive quantitative results are reported.",
"We noticed that (Wu et al., 2017) is exploring the potential of GAN in NMT too.",
"There are some differences in training strategies and experimental settings between (Wu et al., 2017) and this work.",
"And the most significant difference is that we propose a novel BLEU-reinforced GAN for NMT 1 .",
"1 The previous presentation of this work can be found at https://arxiv.org/abs/1703.04887 3 The Approach 3.1 Model overview In this section, we describe the architecture of the proposed BLEU reinforced conditional sequence generative adversarial net (referred to as BR-CSGAN) in detail.",
"The sentence generation process is viewed as a sequence of actions that are taken according to a policy regulated by the generator.",
"In this work, we take the policy gradient training strategies following (Yu et al., 2016).",
"The whole architecture of the proposed model is depicted in figure 1.",
"The model mainly consists of three sub modules: Generator Based on the source-language sentences, the generator G aims to generate target-language sentences indistinguishable from human translations.",
"Discriminator The discriminator D , conditioned on the source-language sentences, tries to distinguish the machine-generated sentences from human translations.",
"D can be viewed as a dynamic objective since it is updated synchronously with G. BLEU objective The sentence-level BLEUQ serves as the reinforced objective, guiding the generation towards high BLEU points.",
"Q is a static function which will not be updated during training.",
"Resembling NMT models, the generator G defines the policy that generates the target sentence y given the source sentence x .",
"The generator takes exactly the same architecture with NMT models.",
"Note that we do not assume the specific architecture of the generator.",
"To verify the effectiveness of the proposed method, we take two different architectures for the generator, the RNNSearch 2 and Transformer 3 .",
"Recently, the deep discriminative models such as the CNN and RNN have shown a high performance in complicated sequence classification tasks.",
"Here, the discriminator is implemented based on the CNN architecture.",
"Since sentences generated by the generator have variable lengths, the CNN padding is used to transform the sentences to sequences with fixed length 2 https://github.com/nyu-dl/dl4mt-tutorial 3 https://github.com/tensorflow/tensor2tensor 1348 G x , g x y human , d x y D G Next action MC search Reward D with Q state Reward Reward Reward Figure 1: The Illustration of the proposed BR-CSGAN.",
"Given the source-language sequence x 1 , . . . , x T and target-language sequence y 1 , . . . , y T , we build the source matrix X 1: T and target matrix Y 1: T respectively as: X 1: T = x 1 ; x 2 ; . . . ; x T (1) and Y 1: T = y 1 ; y 2 ; . . . ; y T (2) where x t , y t R k is the k -dimensional word embedding and the semicolon is the concatenation operator.",
"For the source matrix X 1: T , a kernel w j R l k applies a convolutional operation to a window size of l words to produce a series of feature maps: c ji = ( BN ( w j X i : i + l 1 + b )) (3) where operator is the summation of element-wise production and b is a bias term.",
"is a nonlinear activation function which is implemented as ReLu in this paper.",
"Note that the batch normalization (Ioffe and Szegedy, 2015) which accelerates the training significantly, is applied to the input of the activation function ( BN in equation 3).",
"To get the final feature with respect to kernel w j , a max-over-time pooling operation is leveraged over the feature maps: e c j = max { c j 1 , . . . , c jT l +1 } (4) We use various numbers of kernels with different window sizes to extract different features, which are then concatenated to form the source-language sentence representation c x .",
"Identically, the target-language sentence representation c y can be extracted from the target matrix Y 1: T .",
"Finally, given the source-language sentence, the probability that the target-language sentence is being real can be computed as: p = ( V [ c x ; c y ]) (5) where V is the transform matrix which transforms the concatenation of c x and c y into a 2-dimension embedding and is the logistic function.",
"We apply the smoothed sentence-level BLEU as the specific objective for the generator.",
"Given the generated sentence y g and the the ground true sentence y d , the objective Q calculates a reward Q ( y g , y d ) , which measures the n-gram precisions of the generated sentence y g .",
"Identical to the output of the discriminator, the Q ( y g , y d ) also ranges from zero to one, which makes it easier to fuse Q and D .",
"Following (Yu et al., 2016), the objective of the generator G is defined as to generate a sequence from the start state to maximize its expected end reward.",
"Formally, the objective function is computed as: J ( ) = PY 1: TG ( Y 1: T | X ) RG D,Q ( Y 1: T 1 , X, y T , Y ) where represents the parameters in G , Y 1: T = y 1 , . . . , y T indicates the generated target sequence, X is the source-language sentence, Y represents the ground true target sentence.",
"RG D,Q is the action-value function of a target-language sentence given the source sentence X , i.e. the expected accumulative reward starting from the state ( Y 1: T 1 , X ) , taking action y T , and following the policy G .",
"To estimate the action-value function, we consider the estimated probability of being real by the discriminator D and the output of the BLEU objective Q as the reward: RG D,Q ( Y 1: T 1 , X, y T , Y ) = ( D ( X, Y 1: T ) b ( X, Y 1: T )) + (1 ) Q ( Y 1: T , Y ) where b(X,Y) denotes the baseline value to reduce the variance of the reward.",
"Practically, we take b(X,Y) as a constant, 0.5 for simplicity.",
"And the is a hyper-parameter.",
"The question is that, given the source sequence, D only provides a reward value for a finished target sequence.",
"If Y 1: T is not a finished target sequence, the value of D ( X, Y 1: T ) makes no sense.",
"Therefore, we cannot get the 1349 action-value for an intermediate state directly.",
"To evaluate the action-value for an intermediate state, the Monte Carlo search under the policy of G is applied to sample the unknown tokens.",
"Each search ends until the end of sentence token is sampled or the sampled sentence reaches the maximum length.",
"To obtain more stable reward and reduce the variance, we represent an N-time Monte Carlo search as: { Y 11: T 1 , . . . , YN 1: TN } = MCG (( Y 1: t , X ) , N ) where T i represents the length of the sentence sampled by the i 'th Monte Carlo search.",
"( Y 1: t , X ) = ( y 1 , . . . , y t , X ) is the current state and Y Nt +1: TN is sampled based on the policy G .",
"The discriminator provides N rewards for the sampled N sentences respectively.",
"The final reward for the intermediate state is calculated as the average of the N rewards.",
"Hence, for the target sentence with the length T , we compute the reward for y t in the sentence level as: RG D,Q ( Y 1: t 1 ,X,y T ,Y ) = 1 N NP n =1 ( D ( X,Y n 1: T n ) b ( X,Y n 1: T n )) + (1 ) Q ( Y 1: T n ,Y ) t < T ( D ( X,Y 1: t ) b ( X,Y 1: t )) + (1 ) Q ( Y 1: t ,Y ) t = T Using the discriminator as a reward function can further improve the generator iteratively by dynamically updating the discriminator.",
"Once we get more realistic generated sequences, we re-train the discriminator as: min E X,Y P data [log D ( X,Y )] E X,Y G [log(1 D ( X,Y ))] After updating the discriminator, we are ready to re-train the generator.",
"The gradient of the objective function J ( ) w.r.t the generator's parameter is calculated as: J ( ) = 1 T TP t =1 P y t RG D,Q ( Y 1: t 1 ,X,y T ,Y ) ( G ( y t | Y 1: t 1 ,X )) = 1 T TP t =1 E y t G [ RG D,Q ( Y 1: t 1 ,X,y T ,Y ) log p ( y t | Y 1: t 1 ,X )] 3.6 Training strategies GANs are widely criticized for its unstable training since the generator and discriminator need to be carefully synchronized.",
"To make this work easier to reproduce, this paper gives detailed strategies for training the proposed model.",
"Firstly, we use the maximum likelihood estimation to pre-train the generator on the parallel training set until the best translation performance is achieved.",
"Then, generate the machine-generated sentences by using the generator to decode the training data.",
"We simply use the greedy sampling method instead of the beam search method for decoding.",
"Next, pre-train the discriminator on the combination of the true parallel data and the machine-generated data until the classification accuracy achieves at .",
"Finally, we jointly train the generator and discriminator.",
"The generator is trained with the policy gradient training method.",
"However, in our practice, we find that updating the generator only with the simple policy gradient training leads to unstableness.",
"To alleviate this issue, we adopt the teacher forcing approach which is similar to (Lamb et al., 2016; Li et al., 2017).",
"We directly make the discriminator to automatically assign a reward of 1 to the golden target-language sentence and the generator uses this reward to update itself on the true parallel example.",
"We run the teacher forcing training for one time once the generator is updated by the policy gradient training.",
"After the generator gets updated, we use the new stronger generator to generate more realistic sentences, which are then used to train the discriminator.",
"Following (Arjovsky et al., 2017), we clamp the weights of the discriminator to a fixed box ( [ (cid:15) , (cid:15) ] ) after each gradient update.",
"We perform one optimization step for the discriminator for each step of the generator.",
"In our practice, we set as 0.82, as 5000, (cid:15) as 1.0 and the N for Monte Carlo search as 20.",
"We evaluate our BR-CSGAN on English-German and Chinese-English translation tasks and we test two different architectures for the generator, the traditional RNNSearch and the newly emerged state-of-the-art Transformer.",
"English-German: For English-German translation, we conduct our experiments on the publicly available corpora used extensively as benchmark for NMT systems, WMT'14 En-De.",
"This data set contains 4.5M sentence pairs 4 .",
"Sentences are encoded using byte-pair encoding (Sennrich et al., 2015), which has a shared source-target vocabulary of about 37000 tokens.",
"We report results on newstest2014.",
"The newstest2013 is used as validation.",
"Chinese-English: For Chinese-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 5 .",
"Both the source and target sentences are encoded with byte-pair encoding and the tokens in the source and target vocabulary is about 38000 and 34000 respectively 6 .",
"We choose the NIST02 as the development set.",
"For testing, we use NIST03, NIST04 and NIST05 data sets.",
"To speed up the training procedure, sentences of length over 50 words are removed when we conduct experiments on the RNNSearch model.",
"This is widely used by previous works (Ranzato et al., 2015; Shen et al., 2015; Yang et al., 2016).",
"For the Transformer, following the base model in (Vaswani et al., 2017), we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.",
"The encoder and decoder both have a stack of 6 layers.",
"We use beam search with a beam size of 4 and length penalty = 0 .",
"6 .",
"For the RNNSearch, following (Bahdanau et al., 2014), We set the hidden units for both encoders and decoders as 512.",
"The dimension of the word embedding is also set as 512.",
"We do not apply dropout for training the RNNSearch.",
"During testing, we use beam search with a beam size of 10 and length penalty is not applied.",
"5 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10",
"6 When doing BPE for Chinese, we need to do word segmentation first and the following steps are the same with BPE for English.",
"single machine 7 .",
"We stop training when the model achieves no improvement for the tenth evaluation on the development set.",
"BLEU (Papineni et al., 2002) is utilized as the evaluation metric.",
"We apply the script mteval-v11b.pl to evaluate the Chinese-English translation and utilize the script multi-belu.pl for English-German translation 8 .",
"The model of RNNSearch is optimized with the mini-batch of 64 examples.",
"It takes about 30 hours to pre-train the RNNSearch on the Chinese-English data set and 46 hours on the English-German data set.",
"During generative adversarial training, it takes about 35 hours on the Chinese-English data set and about 50 hours on the English-German data set.",
"For the Transformer, each training batch contains a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.",
"On the Chinese-English data set, it takes about 15 hours to do pretraining and 20 hours to do generative adversarial training.",
"On the English-German data set, it takes about 35 hours for the pre-training and 40 hours for the generative adversarial training.",
"Table 1 shows the BLEU score on Chinese-English and English-German test sets.",
"On the RNNSearch model, the naive GAN (i.e., the line of RNNSearch+BR-CSGAN ( =1) in table",
"1) achieves improvement up to +1.11 BLEU points averagely on Chinese-English test sets and +0.9 BLEU points on English-German test set.",
"Armed 7 The code we used to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/NMT GAN 8 https://github.com/moses-smt/mosesdecoder/blob/617e8c8/scripts/generic/multi-bleu.perl;mteval-v11b.pl 1351 with the BLEU objective, the BR-CSGAN (the line of RNNSearch+BR-CSGAN ( =0.7)) leads to more significant improvements, +1.83 BLEU points averagely on Chinese-English translation and +1.69 BLEU points on English-German translation.",
"We also test the translation performance when the RNNSearch is only guided by the static BLEU objective (the line of RNNSearch+BR-CSGAN ( =0)), and we only get +0.58 BLEU points improvement on Chinese-English translation and +0.55 BLEU points improvement on English-German.",
"Experiments on the Transformer show the same trends.",
"While the Transformer has achieved state-of-the-art translation performances, our approach still achieves +0.81 BLEU points improvement on Chinese-English translation and +0.62 BLEU points improvement on English-German.",
"These results indicate that the proposed BR-CSGAN consistently outperforms the baselines and it shows better translation performance than the naive GAN and the model guided only by the BLEU objective.",
"We show that MRT (Shen et al., 2015) is an extreme case of our approach.",
"Considering a sentence pair ( x, y ) , the training objective of MRT is calculated as b J ( 0 ) = X y s S ( x ) p ( y s | x ; 0 )( y s , y ) where ( y s , y ) is a loss function (i.e., the sentence-level BLEU used in this paper) that measures the discrepancy between a predicted translation y s and the training example y , S ( x ) represents the set which contains all of the predictions given the input x , and 0 is the parameters of the NMT model.",
"Unfortunately, this objective is usually intractable due to the exponential search space.",
"To alleviate this problem, a subset of the search space is sampled to approximate this objective.",
"In this paper, when we set as zero, the objective for the proposed BR-CSGAN comes to J ( ) =0 = XY 1: TG ( Y 1: T | X ) Q ( Y 1: T , Y ) where the Q ( Y 1: T , Y ) is also a loss function between the predicted translation Y 1: T and the training example Y .",
"It is easy to be found that, under this condition (i.e., set as zero), the proposed BR-CSGAN optimizes almost the same objective with MRT.",
"The only difference is that the reinforcement learning procedure is utilized in BR-CSGAN to maximize the total reward and MRT instead applies random sampling to approximate the risk.",
"Actually, the BR-CSGAN is a weighted sum of the naive GAN ( =1) and MRT ( =0), and it incorporates the advantages of the two approaches.",
"Specifically, compared to naive GAN which is trained without specific objective guidance, BR-CSGAN utilizes the BLEU objective to guide the generator to generate sentences with higher BLEU points.",
"And compared to MRT which is trained only with the static objective, the BR-CSGAN applies a dynamic discriminator which updates synchronously with the generator, to feedback the dynamic rewards for the generator.",
"Table 2 compares the translation performance between the MRT and BR-CSGAN on Chinese-English and English-German translation tasks.",
"We only conduct experiments on the RNNSearch because we only get the open-source implementation of MRT on the RNNSearch 9 .",
"Results show that the proposed BR-CSGAN consistently outperforms the MRT on the Chinese-English and English-German translations.",
"The initial accuracy of the discriminator which is viewed as a hyper-parameter, can be set carefully during the process of pre-training.",
"A natural question is that when shall we end the pretraining.",
"Do we need to pre-train the discriminator with the highest accuracy?",
"To answer this question, we test the impact of the initial accuracy of the discriminator.",
"We pre-train five discriminators which have the accuracy as 0.6, 0.7, 0.8, 0.9 and 0.95 respectively.",
"With the five discriminators, we train five different BR-CSGAN models (with the generator as RNNSearch and set as 0.7) and test 9 The open-source implementation can be found at: http-s://github.com/EdinburghNLP/nematus 1352 0 2 4 6 8 10 12 14 16 18 Test step 5 10 15 20 25 30 35 40 BLEU 0.6-acc 0.7-acc 0.8-acc 0.9-acc 0.95-acc Figure 2: BLEU score on the development set for the BR-CSGAN where the discriminators have different initial accuracy.",
"their translation performances on the development set at regular intervals.",
"Figure 2 reports the results and we can find that the initial accuracy of the discriminator shows great impacts on the translation performance of the proposed BR-CSGAN.",
"From figure 2, we show that the initial accuracy of the discriminator needs to be set carefully and either it is too high (0.9 and 0.95) or too low (0.6 and 0.7), the model performs badly 10 .",
"This suggests that it is important for the generator and discriminator to keep a balanced relationship at the beginning of the generative adversarial training.",
"If the discriminator is too strong, the generator is always penalized for its bad predictions and gets no idea about right predictions.",
"Hence, the generator is discouraged all the time and the performance gets worse and worse.",
"On the other hand, if the discriminator is too weak, the discriminator is unable to give right guidance for the generator, i.e. the gradient direction for updating the generator is random.",
"Empirically, we pre-train the discriminator until its accuracy reaches around 0.8.",
"We are also curious about how the sample times N for Monte Carlo search affects the translation performance.",
"Intuitively, if N is set as a smal-l number, the intermediate reward for each word may be incorrect since there is a large variance for the Monto Carol search when the sample time is too small.",
"And if otherwise, the computation 10 To make the illustration simple and clear, we only depict the results when the RNNSearch acts as the generator.",
"shall be very time consuming because we need to do much more sampling.",
"Therefore, there is a trade-off between the accuracy and computation complexity here.",
"We investigate this problem on the Chinese-English translation task.",
"Table 3 presents the translation performance of the BR-CSGAN on the test sets when the N are set from 5 to 30 with interval 5.",
"From table 3, the proposed model achieves no improvement than the baseline (i.e., the pre-trained generator) when N are set less than 15 and the BLEU scores are not reported on the table.",
"As a matter of fact, the translation performance of the model gets worse and worse.",
"We conjecture that the approximated reward is far from the expected reward due to the large variance when N is set too small, and gives wrong gradient directions for model updating.",
"Since the training for GAN is not stable, the wrong gradient direction exacerbates the unstableness and results in the BLEU getting worse and worse.",
"With the increasing of N , the translation performance of the model gets improved.",
"However, with N set larger than 20, we get little improvement than the model with N set as 20 and the training time exceeds our expectation.",
"In this work, we propose the BR-CSGAN which leverages the BLEU reinforced generative adversarial net to improve the NMT.",
"We show that the proposed approach is a weighted combination of the naive GAN and MRT.",
"To verify the effectiveness of our approach, we test two different architectures for the generator, the traditional RNNSearch and the state-of-the-art Transformer.",
"Extensive experiments on Chinese-English and English-German translation tasks show that our approach consistently achieves significant improvements.",
"In the future, we would like to try multi-adversarial framework which consists of multi discriminators and generators for GAN.",
"This work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002102, and Beijing Engineering Research Center under Grant No.",
"Z171100002217015.",
"We would like to thank Xu Shuang for her preparing data used in this work.",
"Additionally, we also want to thank Chen Zhineng, Wang Wenfu and Zhao Yuanyuan for their invaluable discussions on this work."
] | [
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"objective",
"objective",
"other",
"abstain",
"abstain",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Although BERT and its variants have reshaped the NLP landscape, it still remains unclear how best to derive sentence embeddings from such pre-trained Transformers.",
"In this work, we propose a contrastive learning method that utilizes self-guidance for improving the quality of BERT sentence representations.",
"Our method fine-tunes BERT in a self-supervised fashion, does not rely on data augmentation, and enables the usual [CLS] token embeddings to function as sentence vectors.",
"Moreover, we redesign the contrastive learning objective (NT-Xent) and apply it to sentence representation learning.",
"We demonstrate with extensive experiments that our approach is more effective than competitive baselines on diverse sentence-related tasks.",
"We also show it is efficient at inference and robust to domain shifts.",
"Pre-trained Transformer (Vaswani et al., 2017) language models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have been integral to achieving recent improvements in natural language understanding.",
"However, it is not straightforward to directly utilize these models for sentence-level tasks, as they are basically pre-trained to focus on predicting (sub)word tokens given context.",
"The most typical way of converting the models into sentence encoders is to fine-tune them with supervision from a downstream task.",
"In the process, as initially proposed by Devlin et al. (2019), a pre-defined to-ken's (a.k.a. [CLS] ) embedding from the last layer of the encoder is deemed as the representation of an input sequence.",
"This simple but effective method is possible because, during supervised fine-tuning, the [CLS] embedding functions as the only communication gate between the pre-trained encoder * This work has been mainly conducted when TK was a research intern at NAVER AI Lab.",
"and a task-specific layer, encouraging the [CLS] vector to capture the holistic information.",
"On the other hand, in cases where labeled datasets are unavailable, it is unclear what the best strategy is for deriving sentence embeddings from BERT.",
"1 In practice, previous studies (Reimers and Gurevych, 2019; Li et al., 2020; Hu et al., 2020) reported that navely (i.e., without any processing) leveraging the [CLS] embedding as a sentence representation, as is the case of supervised fine-tuning, results in disappointing outcomes.",
"Currently, the most common rule of thumb for building BERT sentence embeddings without supervision is to apply mean pooling on the last layer(s) of BERT.",
"Yet, this approach can be still sub-optimal.",
"In a preliminary experiment, we constructed sentence embeddings by employing various combinations of different BERT layers and pooling methods, and tested them on the Semantic Textual Similarity (STS) benchmark dataset (Cer et al., 2017).",
"2 We discovered that BERT(-base)'s performance, measured in Spearman correlation ( 100), can range from as low as 16.71 ( [CLS] , the 10 th layer) to 63.19 (max pooling, the 2 nd layer) depending on the selected layer and pooling method (see Figure 1).",
"This result suggests that the current practice of building BERT sentence vectors is not solid enough, and that there is room to bring out more of BERT's expressiveness.",
"In this work, we propose a contrastive learning method that makes use of a newly proposed self-guidance mechanism to tackle the aforementioned problem.",
"The core idea is to recycle intermediate BERT hidden representations as positive samples to which the final sentence embedding should be close.",
"As our method does not require data augmentation, which is essential in most recent contrastive learning frameworks, it is much simpler and easier to use than existing methods (Fang and Xie, 2020; Xie et al., 2020).",
"Moreover, we customize the NT-Xent loss (Chen et al., 2020), a contrastive learning objective widely used in computer vision, for better sentence representation learning with BERT.",
"We demonstrate that our approach outperforms competitive baselines designed for building BERT sentence vectors (Li et al., 2020; Wang and Kuo, 2020) in various environments.",
"With comprehensive analyses, we also show that our method is more computationally efficient than the baselines at inference in addition to being more robust to domain shifts.",
"Contrastive Representation Learning.",
"Contrastive learning has been long considered as effective in constructing meaningful representations.",
"For instance, Mikolov et al. (2013) propose to learn word embeddings by framing words nearby a target word as positive samples while others as negative.",
"Logeswaran and Lee (2018) generalize the approach of Mikolov et al. (2013) for sentence representation learning.",
"More recently, several studies (Fang and Xie, 2020; Giorgi et al., 2020; Wu et al., 2020) suggest to utilize contrastive learning 2 In the experiment, we employ the settings identical with ones used in Chapter 4.",
"for training Transformer models, similar to our approach.",
"However, they generally require data augmentation techniques, e.g., back-translation (Sen-nrich et al., 2016), or prior knowledge on training data such as order information, while our method does not.",
"Furthermore, we focus on revising BERT for computing better sentence embeddings rather than training a language model from scratch.",
"On the other hand, contrastive learning has been also receiving much attention from the computer vision community (Chen et al. (2020); Chen and He (2020); He et al. (2020), inter alia ).",
"We improve the framework of Chen et al. (2020) by optimizing its learning objective for pre-trained Transformer-based sentence representation learning.",
"For extensive surveys on contrastive learning, refer to Le-Khac et al. (2020) and Jaiswal et al. (2020).",
"Fine-tuning BERT with Supervision.",
"It is not always trivial to fine-tune pre-trained Transformer models of gigantic size with success, especially when the number of target domain data is limited (Mosbach et al., 2020).",
"To mitigate this training instability problem, several approaches (Aghajanyan et al., 2020; Jiang et al., 2020; Zhu et al., 2020) have been recently proposed.",
"In particular, Gunel et al. (2021) propose to exploit contrastive learning as an auxiliary training objective during fine-tuning BERT with supervision from target tasks.",
"In contrast, we deal with the problem of adjusting BERT when such supervision is not available.",
"Sentence Embeddings from BERT.",
"Since BERT and its variants are originally designed to be fine-tuned on each downstream task to attain their optimal performance, it remains ambiguous how best to extract general sentence representations from them, which are broadly applicable across diverse sentence-related tasks.",
"Following Conneau et al. (2017), Reimers and Gurevych (2019) (SBERT) propose to compute sentence embeddings by conducting mean pooling on the last layer of BERT and then fine-tuning the pooled vectors on the natural language inference (NLI) datasets (Bow-man et al., 2015; Williams et al., 2018).",
"Meanwhile, some other studies concentrate on more effectively leveraging the knowledge embedded in BERT to construct sentence embeddings without supervision.",
"Specifically, Wang and Kuo (2020) propose a pooling method based on linear algebraic algorithms to draw sentence vectors from BERT's intermediate layers.",
"Li et al. (2020) suggest to learn a mapping from the average of the embeddings obtained from the last two layers of BERT to a spherical Gaussian distribution using a flow model, and to leverage the redistributed embeddings in place of the original BERT representations.",
"We follow the setting of Li et al. (2020) in that we only utilize plain text during training, however, unlike all the others that rely on a certain pooling method even after training, we directly refine BERT so that the typical [CLS] vector can function as a sentence embedding.",
"Note also that there exists concurrent work (Carlsson et al., 2021; Gao et al., 2021; Wang et al., 2021) whose motivation is analogous to ours, attempting to improve BERT sentence embeddings in an unsupervised fashion.",
"As BERT mostly requires some type of adaptation to be properly applied to a task of interest, it might not be desirable to derive sentence embeddings directly from BERT without fine-tuning.",
"While Reimers and Gurevych (2019) attempt to alleviate this problem with typical supervised fine-tuning, we restrict ourselves to revising BERT in an unsupervised manner, meaning that our method only demands a bunch of raw sentences for training.",
"Among possible unsupervised learning strategies, we concentrate on contrastive learning which can inherently motivate BERT to be aware of similarities between different sentence embeddings.",
"Considering that sentence vectors are widely used in computing the similarity of two sentences, the inductive bias introduced by contrastive learning can be helpful for BERT to work well on such tasks.",
"The problem is that sentence-level contrastive learning usually requires data augmentation (Fang and Xie, 2020) or prior knowledge on training data, e.g., order information (Logeswaran and Lee, 2018), to make plausible positive/negative samples.",
"We attempt to circumvent these constraints by utilizing the hidden representations of BERT, which are readily accessible, as samples in the embedding space.",
"We aim at developing a contrastive learning method that is free from external procedure such as data augmentation.",
"A possible solution is to leverage (virtual) adversarial training (Miyato et al., 2018) in the embedding space.",
"However, there is no assurance that the semantics of a sentence embedding would remain unchanged when it is added with a Copy Sampler Projection Head (initialize) ...",
"random noise.",
"As an alternative, we propose to utilize the hidden representations from BERT's intermediate layers, which are conceptually guaranteed to represent corresponding sentences, as pivots that BERT sentence vectors should be close to or be away from.",
"We call our method as self-guided contrastive learning since we exploit internal training signals made by BERT itself to fine-tune it.",
"We describe our training framework in Figure 2.",
"First, we clone BERT into two copies, BERTF ( fixed ) and BERTT ( tuned ) respectively.",
"BERTF is fixed during training to provide a training signal while BERTT is fine-tuned to construct better sentence embeddings.",
"The reason why we differentiate BERTF from BERTT is that we want to prevent the training signal computed by BERTF from being degenerated as the training procedure continues, which often happens when BERTF = BERTT .",
"This design decision also reflects our philosophy that our goal is to dynamically conflate the knowledge stored in BERT's different layers to produce sentence embeddings, rather than introducing new information via extra training.",
"Note that in our setting, the [CLS] vector from the last layer of BERTT , i.e., c i , is regarded as the final sentence embedding we aim to optimize/utilize during/after fine-tuning.",
"Second, given b sentences in a mini-batch, say s 1 , s 2 , , s b , we feed each sentence s i into BERTF and compute token-level hidden representations H i,k R len ( s i ) d : [ H i, 0 ; H i, 1 ; ; H i,k ; ; H i,l ] = BERTF ( s i ) , where 0 k l (0: the non-contextualized layer), l is the number of hidden layers in BERT, len ( s i ) is the length of the tokenized sentence, and d is the size of BERT's hidden representations.",
"Then, we apply a pooling function p to H i,k for deriving diverse sentence-level views h i,k R d from all layers, i.e., h i,k = p ( H i,k ) .",
"Finally, we choose the final view to be utilized by applying a sampling function : h i = ( { h i,k | 0 k l } ) .",
"As we have no specific constraints in defining p and , we employ max pooling as p and a uniform sampler as for simplicity, unless otherwise stated.",
"This simple choice for the sampler implies that each h i,k has the same importance, which is persuasive considering it is known that different BERT layers are specialized at capturing disparate linguistic concepts (Jawahar et al., 2019).",
"3 Third, we compute our sentence embedding c i for s i as follows: c i = BERTT ( s i ) [CLS] , where BERT ( ) [CLS] corresponds to the [CLS] vector obtained from the last layer of BERT.",
"Next, we collect the set of the computed vectors into X = { x | x { c i } { h i }} , and for all x m X, we compute the NT-Xent loss (Chen et al., 2020): L basem = log ( ( x m , ( x m )) /Z ) , where ( u , v ) = exp ( g ( f ( u ) , f ( v )) / ) and Z = (cid:80) 2 bn =1 ,n (cid:54) = m ( x m , x n ) .",
"Note that is a temperature hyperparameter, f is a projection head consisting of MLP layers, 4 g ( u , v ) = u v / (cid:107) u (cid:107)(cid:107) v (cid:107) is the cosine similarity function, and ( ) is the matching function defined as follows, ( x ) = (cid:40) h i if x is equal to c i .",
"c i if x is equal to h i .",
"Lastly, we sum all L basem divided by 2 b , and add a regularizer L reg = (cid:107) BERTF BERTT (cid:107) 22 to prevent BERTT from being too distant from BERTF .",
"5 3 We can also potentially make use of another sampler functions to inject our bias or prior knowledge on target tasks.",
"4 We employ a two-layered MLP whose hidden size is 4096.",
"Each linear layer in the MLP is followed by a GELU function.",
"5 To be specific, L reg is the square of the L2 norm of the difference between BERTF and BERTT .",
"As shown in Figure 2, we also freeze the 0 th layer of BERTT for stable learning.",
"To summarize, our method refines BERT so that the sentence embedding c i has a higher similarity with h i , which is another representation for the sentence s i , in the subspace projected by f while being relatively dissimilar with c j,j (cid:54) = i and h j,j (cid:54) = i .",
"After training is completed, we remove all the components except BERTT and simply use c i as the final sentence representation.",
"In Section 3.1, we relied on a simple variation of the general NT-Xent loss, which is composed of four factors.",
"Given sentence s i and s j without loss of generality, the factors are as follows (Figure 3): (1) c i h i (or c j h j ): The main component that mirrors our core motivation that a BERT sentence vector ( c i ) should be consistent with intermediate views ( h i ) from BERT.",
"(2) c i c j : A factor that forces sentence embeddings ( c i , c j ) to be distant from each other.",
"(3) c i h j (or c j h i ): An element that makes c i being inconsistent with views for other sentences ( h j ).",
"(4) h i h j : A factor that causes a discrepancy between views of different sentences ( h i , h j ).",
"Even though all the four factors play a certain role, some components may be useless or even cause a negative influence on our goal.",
"For instance, Chen and He (2020) have recently reported that in image representation learning, only (1) is vital while others are nonessential.",
"Likewise, we customize the training loss with three major modifications so that it can be more well-suited for our purpose.",
"First, as our aim is to improve c i with the aid of h i , we re-define our loss focusing more on c i rather than considering c i and h i as equivalent entities: L opt 1 i = log ( ( c i , h i ) / Z ) , where Z = (cid:80) bj =1 ,j (cid:54) = i ( c i , c j ) + (cid:80) bj =1 ( c i , h j ) .",
"In other words, h i only functions as points that c i is encouraged to be close to or away from, and is not deemed as targets to be optimized.",
"This revision naturally results in removing (4).",
"Furthermore, we discover that (2) is also insignificant for improving performance, and thus derive L opt 2 i : L opt 2 i = log( ( c i , h i ) / (cid:80) bj =1 ( c i , h j )) .",
"Lastly, we diversify signals from (1) and (3) by allowing multiple views { h i,k } to guide c i : L opt 3 i,k = log ( c i , h i,k ) ( c i , h i,k )+ (cid:80) b m =1 ,m (cid:54) = i (cid:80) l n =0 ( c i , h m,n ) .",
"We expect with this refinement that the learning objective can provide more precise and fruitful training signals by considering additional (and freely available) samples being provided with.",
"The final form of our optimized loss is: L opt = 1 b ( l + 1) b (cid:88) i =1 l (cid:88) k =0 L opt 3 i,k + L reg .",
"In Section 5.1, we show the decisions made in this section contribute to improvements in performance.",
"In terms of pre-trained encoders, we leverage BERT (Devlin et al., 2019) for English datasets and MBERT, which is a multilingual variant of BERT, for multilingual datasets.",
"We also employ RoBERTa (Liu et al., 2019) and SBERT (Reimers and Gurevych, 2019) in some cases to evaluate the generalizability of tested methods.",
"We use the suffixes -base' and -large' to distinguish small and large models.",
"Every trainable model's performance is reported as the average of 8 separate runs to reduce randomness.",
"Hyperparameters are optimized on the STS-B validation set using BERT-base and utilized across different models.",
"See Table 8 in Appendix A.1 for details.",
"Our implementation is based on the HuggingFace's Transformers (Wolf et al., 2019) and SBERT (Reimers and Gurevych, 2019) library, and publicly available at https://github.com/galsang/SG-BERT .",
"We first evaluate our method and baselines on Semantic Textual Similarity (STS) tasks.",
"Given two sentences, we derive their similarity score by computing the cosine similarity of their embeddings.",
"Datasets and Metrics.",
"Following the literature, we evaluate models on 7 datasets in total, that is, STS-B (Cer et al., 2017), SICK-R (Marelli et al., 2014), and STS12-16 (Agirre et al., 2012, 2013, 2014, 2015, 2016).",
"These datasets contain pairs of two sentences, whose similarity scores are labeled from 0 to 5.",
"The relevance between gold annotations and the scores predicted by sentence vectors is measured in Spearman correlation ( 100).",
"Baselines and Model Specification.",
"We first prepare two non-BERT approaches as baselines, i.e., Glove (Pennington et al., 2014) mean embeddings and Universal Sentence Encoder (USE; Cer et al. (2018)).",
"In addition, various methods for BERT sentence embeddings that do not require supervision are also introduced as baselines: CLS token embedding: It regards the [CLS] vector from the last layer of BERT as a sentence representation.",
"Mean pooling: This method conducts mean pooling on the last layer of BERT and use the output as a sentence embedding.",
"WK pooling: This follows the method of Wang and Kuo (2020), which exploits QR decomposition and extra techniques to derive meaningful sentence vectors from BERT.",
"Flow : This is BERT-flow proposed by Li et al. (2020), which is a flow-based model that maps the vectors made by taking mean pooling on the last two layers of BERT to a Gaussian space.",
"6 Contrastive (BT) : Following Fang and Xie (2020), we revise BERT with contrastive learning.",
"However, this method relies on back-translation to obtain positive samples, unlike ours.",
"Details about this baseline are specified in Appendix A.2.",
"We make use of plain sentences from STS-B to fine-tune BERT using our approach, identical with Flow.",
"7 We name the BERT instances trained with our self-guided method as Contrastive (SG) and 6 We restrictively utilize this model, as we find it difficult to exactly reproduce the model's result with its official code.",
"7 For training, Li et al. (2020) utilize the concatenation of the STS-B training, validation, and test set ( without gold anno-tations).",
"We also follow the same setting for a fair comparison.",
"Results.",
"We report the performance of different approaches on STS tasks in Table 1 and Table 11 (Appendix A.6).",
"From the results, we confirm the fact that our methods (SG and SG-OPT) mostly outperform other baselines in a variety of experimental settings.",
"As reported in earlier studies, the nave [CLS] embedding and mean pooling are turned out to be inferior to sophisticated methods.",
"To our surprise, WK pooling's performance is even lower than that of mean pooling in most cases, and the only exception is when WK pooling is applied to SBERT-base.",
"Flow shows its strength outperforming the simple strategies.",
"Nevertheless, its performance is shown to be worse than that of our methods (although some exceptions exist in the case of SBERT-large).",
"Note that contrastive learning becomes much more competitive when it is combined with our self-guidance algorithm rather than back-translation.",
"It is also worth mentioning Models Spanish Baseline (Agirre et al., 2014) UMCC-DLSI-run2 (Rank #1) 80.69 MBERT + CLS 12.60 + Mean pooling 81.14 + WK pooling 79.78 + Contrastive (BT) 78.04 + Contrastive (SG) 82.09 + Contrastive (SG-OPT) 82.74 Table 2: SemEval-2014 Task 10 Spanish task.",
"that the optimized version of our method (SG-OPT) generally shows better performance than the basic one (SG), proving the efficacy of learning objective optimization (Section 3.2).",
"To conclude, we demonstrate that our self-guided contrastive learning is effective in improving the quality of BERT sentence embeddings when tested on STS tasks.",
"We expand our experiments to multilingual settings by utilizing MBERT and cross-lingual zero-shot transfer.",
"Specifically, we refine MBERT using only Models Arabic Spanish English (Track",
"English data and test it on datasets written in other languages.",
"As in Section 4.2, we use the English STS-B for training.",
"We consider two datasets for evaluation: (1) SemEval-2014 Task 10 (Spanish; Agirre et al. (2014)) and (2) SemEval-2017 Task 1 (Arabic, Spanish, and English; Cer et al. (2017)).",
"Performance is measured in Pearson correlation ( 100) for a fair comparison with previous work.",
"From Table 2, we see that MBERT with mean pooling already outperforms the best system (at the time of the competition was held) on SemEval-2014 and that our method further boosts the model's performance.",
"In contrast, in the case of SemEval-2017 (Table 3), MBERT with mean pooling even fails to beat the strong Cosine baseline.",
"8 However, MBERT becomes capable of outperforming (in English/Spanish) or being comparable with (Arabic) the baseline by adopting our algorithm.",
"We observe that while cross-lingual transfer using MBERT looks promising for the languages analogous to English (e.g., Spanish), its effectiveness may shrink on distant languages (e.g., Arabic).",
"Compared against the best system which is trained on task-specific data, MBERT shows reasonable performance considering that it is never exposed to any labeled STS datasets.",
"In summary, we demonstrate that MBERT fine-tuned with our method has a potential to be used as a simple but effective tool for multilingual (especially European) STS tasks.",
"We also evaluate BERT sentence vectors using the SentEval (Conneau and Kiela, 2018) toolkit.",
"Given sentence embeddings, SentEval trains linear classi-fiers on top of them and estimates the quality of the vectors via their performance (accuracy) on down-8 The Cosine baseline computes its score as the cosine similarity of binary sentence vectors with each dimension representing whether an individual word appears in a sentence.",
"stream tasks.",
"Among available tasks, we employ 7: MR, CR, SUBJ, MPQA, SST2, TREC, MRPC.",
"9 In Table 4, we compare our method (SG-OPT) with two baselines.",
"10 We find that our method is helpful over usual mean pooling in improving the performance of BERT-like models on SentEval.",
"SG-OPT also outperforms WK pooling on BERT-base/large while being comparable on SBERT-base.",
"From the results, we conjecture that self-guided contrastive learning and SBERT training suggest a similar inductive bias in a sense, as the benefit we earn by revising SBERT with our method is relatively lower than the gain we obtain when fine-tuning BERT.",
"Meanwhile, it seems that WK pooling provides an orthogonal contribution that is effective in the focused case, i.e., SBERT-base.",
"In addition, we examine how our algorithm impacts on supervised fine-tuning of BERT, although it is not the main concern of this work.",
"Briefly reporting, we identify that the original BERT(-base) and one tuned with SG-OPT show comparable performance on the GLUE (Wang et al., 2019) validation set, implying that our method does not influence much on BERT's supervised fine-tuning.",
"We refer readers to Appendix A.4 for more details.",
"We here further investigate the working mechanism of our method with supplementary experiments.",
"All the experiments conducted in this section follow the configurations stipulated in Section 4.1 and 4.2.",
"9 Refer to Conneau and Kiela (2018) for each task's spec.",
"10 We focus on reporting our own results as we discovered that the toolkit's outcomes can be fluctuating depending on its configuration (we list our settings in Appendix A.3).",
"We also restrict ourselves to evaluating SG-OPT for simplicity, as SG-OPT consistently showed better performance than other contrastive methods in previous experiments.",
"We conduct an ablation study to justify the decisions made in optimizing our algorithm.",
"To this end, we evaluate each possible variant on the test sets of STS tasks.",
"From Table 5, we confirm that all our modifications to the NT-Xent loss contribute to improvements in performance.",
"Moreover, we show that correct choices for hyperparameters are important for achieving the optimal performance, and that the projection head ( f ) plays a significant role as in Chen et al. (2020).",
"Although our method in principle can accept any sentences in training, its performance might be varied with the training data it employs (especially depending on whether the training and test data share the same domain).",
"To explore this issue, we apply SG-OPT on BERT-base by leveraging the mix of NLI datasets (Bowman et al., 2015; Williams et al., 2018) instead of STS-B, and observe the difference.",
"From Figure 4, we confirm the fact Layer Elapsed Time Training (sec.) Inference (sec.) BERT-base + Mean pooling -13.94 + WK pooling -197.03 ( 3.3 min.) + Flow 155.37 ( 2.6 min.) 28.49 + Contrastive (SG-OPT) 455.02 ( 7.5 min.) 10.51 Table 6: Computational efficiency tested on STS-B.",
"that no matter which test set is utilized (STS-B or all the seven STS tasks), our method clearly outperforms Flow in every case, showing its relative robustness to domain shifts.",
"SG-OPT only loses 1.83 (on the STS-B test set) and 1.63 (on average when applied to all the STS tasks) points respectively when trained with NLI rather than STS-B, while Flow suffers from the considerable losses of 12.16 and 4.19 for each case.",
"Note, however, that follow-up experiments in more diverse conditions might be desired as future work, as the NLI dataset inherently shares some similarities with STS tasks.",
"In this part, we compare the computational efficiency of our method to that of other baselines.",
"For each algorithm, we measure the time elapsed during training (if required) and inference when tested on STS-B.",
"All methods are run on the same machine (an Intel Xeon CPU E5-2620 v4 @ 2.10GHz and a Titan Xp GPU) using batch size 16.",
"The experimental results specified in Table 6 show that although our method demands a moderate amount of time ( < 8 min.) for training, it is the most efficient at inference, since our method is free from any post-processing such as pooling once training is completed.",
"We visualize a few variants of BERT sentence representations to grasp an intuition on why our method is effective in improving performance.",
"Specifically, we sample 20 positive pairs (red, whose similarity scores are 5) and 20 negative pairs (blue, whose scores are",
"0) from the STS-B validation set.",
"Then we compute their vectors and draw them on the 2D space with the aid of t-SNE.",
"In Figure 5, we confirm that our SG-OPT encourages BERT sentence embeddings to be more well-aligned with their positive pairs while still being relatively far from their negative pairs.",
"We also visualize embeddings from SBERT (Figure 6 in Appendix A.5), and identify that our approach and the supervised fine-tuning Models Pooling STS-B SICK-R STS12 STS13 STS14 STS15 STS16 Avg.",
"used in SBERT provide a similar effect, making the resulting embeddings more suitable for calculating correct similarities between them.",
"In this section, we discuss a few weaknesses of our method in its current form and look into some possible avenues for future work.",
"First, while defining the proposed method in Section 3, we have made decisions on some parts without much consideration about their optimality, prioritizing simplicity instead.",
"For instance, although we proposed utilizing all the intermediate layers of BERT and max pooling in a normal setting (indeed, it worked pretty well for most cases), a specific subset of the layers or another pooling method might bring better performance in a particular environment, as we observed in Section 4.4 that we could achieve higher numbers by employing mean pooling and excluding lower layers in the case of SentEval (refer to Appendix A.3 for details).",
"Therefore, in future work, it is encouraged to develop a systematic way of making more optimized design choices in specifying our method by considering the characteristics of target tasks.",
"Second, we expect that the effectiveness of contrastive learning in revising BERT can be improved further by properly combining different techniques developed for it.",
"As an initial attempt towards this direction, we conduct an extra experiment where we test the ensemble of back-translation and our self-guidance algorithm by inserting the original sentence into BERTT and its back-translation into BERTF when running our framework.",
"In Table 7, we show that the fusion of the two techniques generally results in better performance, shedding some light on our future research direction.",
"In this paper, we have proposed a contrastive learning method with self-guidance for improving BERT sentence embeddings.",
"Through extensive experiments, we have demonstrated that our method can enjoy the benefit of contrastive learning without relying on external procedures such as data augmentation or back-translation, succeeding in generating higher-quality sentence representations compared to competitive baselines.",
"Furthermore, our method is efficient at inference because it does not require any post-processing once its training is completed, and is relatively robust to domain shifts.",
"We would like to thank anonymous reviewers for their fruitful feedback.",
"We are also grateful to Jung-Woo Ha, Sang-Woo Lee, Gyuwan Kim, and other members in NAVER AI Lab in addition to Reinald Kim Amplayo for their insightful comments."
] | [
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Sequence-to-sequence transduction is the core problem in language processing applications as diverse as semantic parsing, machine translation, and instruction following.",
"The neural network models that provide the dominant solution to these problems are brittle, especially in low-resource settings: they fail to generalize correctly or systematically from small datasets.",
"Past work has shown that many failures of systematic generalization arise from neural models' inability to disentangle lexical phenomena from syntactic ones.",
"To address this, we augment neural decoders with a lexical translation mechanism that generalizes existing copy mechanisms to incorporate learned, decontex-tualized, token-level translation rules.",
"We describe how to initialize this mechanism using a variety of lexicon learning algorithms, and show that it improves systematic generalization on a diverse set of sequence modeling tasks drawn from cognitive science, formal semantics, and machine translation.",
"1 1 Introduction Humans exhibit a set of structured and remarkably consistent inductive biases when learning from language data.",
"For example, in both natural language acquisition and toy language-learning problems like the one depicted in Fig. 1, human learners exhibit a preference for systematic and compositional interpretation rules (Guasti 2017, Chapter 4; Lake et al. 2019).",
"These inductive biases in turn support behaviors like one-shot learning of new concepts (Carey and Bartlett, 1978).",
"But in natural language processing, recent work has found that state-of-the-art neural models, while highly effective at in-domain prediction, fail to generalize in human-like ways when faced with rare phenomena 1 Our code is released under https://github.com/ ekinakyurek/lexical Train Test dax lug wif zup zup fep zup blicket lug ___ ?",
"and small datasets (Lake and Baroni, 2018), posing a fundamental challenge for NLP tools in the low-data regime.",
"Pause for a moment to fill in the missing labels in Fig.",
"1. While doing so, which training examples did you pay the most attention to?",
"How many times did you find yourself saying means or maps to ?",
"Explicit representations of lexical items and their meanings play a key role diverse models of syntax and semantics (Joshi and Schabes, 1997; Pollard and Sag, 1994; Bresnan et al., 2015).",
"But one of the main findings in existing work on generalization in neural models is that they fail to cleanly separate lexical phenomena from syntactic ones (Lake and Baroni, 2018).",
"Given a dataset like the one depicted in Fig. 1, models conflate (lexical) information about the correspondence between zup and y with the (syntactic) fact that y appears only in a sequence of length 1 at training time.",
"Longer input sequences containing the word zup in new syntactic contexts cause models to output tokens only seen in longer sequences (Section 5).",
"In this paper, we describe a parameterization for sequence decoders that facilitates (but does not enforce) the learning of context-independent word meanings.",
"Specifically, we augment decoder output layers with a lexical translation mechanism which generalizes neural copy mechanisms (e.g. See et al., 2017) and enables models to generate token-level translations purely attentionally.",
"While the lexical translation mechanism is quite general, we focus here on its ability to improve few-shot learning in sequence-to-sequence models.",
"On a suite of challenging tests of few-shot semantic parsing and instruction following, our model exhibits strong generalization, achieving the highest reported results for neural sequence models on datasets as diverse as COGS (Kim and Linzen 2020, with 24155 training examples) and Colors (Lake et al. 2019, with 14).",
"Our approach also generalizes to real-world tests of few-shot learning, improving BLEU scores (Papineni et al., 2002) by 1.2 on a low-resource EnglishChinese machine translation task (2.2 on test sentences requiring one-shot word learning).",
"In an additional set of experiments, we explore effective procedures for initializing the lexical translation mechanism using lexicon learning algorithms derived from information theory, statistical machine translation, and Bayesian cognitive modeling.",
"We find that both mutual-information-and alignmentbased lexicon initializers perform well across tasks.",
"Surprisingly, however, we show that both approaches can be matched or outperformed by a rule-based initializer that identifies high-precision word-level token translation pairs.",
"We then explore joint learning of the lexicon and decoder, but find (again surprisingly) that this gives only marginal improvements over a fixed initialization of the lexicon.",
"Introduces a new, lexicon-based output mechanism for neural encoderdecoder models.",
"Investigates and improves upon lexicon learning algorithms for initialising this mechanism.",
"Uses it to solve challenging tests of generalization in instruction following, semantic parsing and machine translation.",
"A great deal of past work has suggested that neural models come equipped with an inductive bias that makes them fundamentally ill-suited to human-like generalization about language data, especially in the low-data regime (e.g. Fodor et al., 1988; Marcus, 2018).",
"Our results suggest that the situation is more complicated: by offloading the easier lexicon learning problem to simpler models, neural sequence models are actually quite effective at modeling (and generalizing about) about syntax in synthetic tests of generalization and real translation tasks.",
"Systematic generalization in neural sequence models The desired inductive biases noted above are usually grouped together as systematicity but in fact involve a variety of phenomena: one-shot learning of new concepts and composition rules (Lake and Baroni, 2018), zero-shot interpretation of novel words from context cues (Gandhi and Lake, 2020), and interpretation of known concepts in novel syntactic configurations (Keysers et al., 2020; Kim and Linzen, 2020).",
"What they share is a common expectation that learners should associate specific production or transformation rules with specific input tokens (or phrases), and generalize to use of these tokens in new contexts.",
"Recent years have seen tremendous amount of modeling work aimed at encouraging these generalizations in neural models, primarily by equipping them with symbolic scaffolding in the form of program synthesis engines (Nye et al., 2020), stack machines (Grefenstette et al., 2015; Liu et al., 2020), or symbolic data transformation rules (Gordon et al., 2019; Andreas, 2020).",
"A parallel line of work has investigated the role of continuous representations in systematic generalization, proposing improved methods for pretraining (Furrer et al., 2020) and procedures for removing irrelevant contextual information from word representations (Arthur et al., 2016; Russin et al., 2019; Thrush, 2020).",
"The latter two approaches proceed from similar intuition to ours, aiming to disentangle word meanings from syntax in encoder representations via alternative attention mechanisms and adversarial training.",
"Our approach instead focuses on providing an explicit lexicon to the decoder; as discussed below, this appears to be considerably more effective.",
"Copying and lexicon learning In neural encoderdecoder models, the clearest example of benefits from special treatment of word-level production rules is the copy mechanism .",
"A great deal of past work has found that neural models Inputs Outputs Lexicon Entries A crocodile blessed William .",
"benefit from learning a structural copy operation that selects output tokens directly from the input sequence without requiring token identity to be carried through all neural computation in the encoder and the decoder.",
"These mechanisms are described in detail in Section 3, and are widely used in models for language generation, summarization and semantic parsing.",
"Our work generalizes these models to structural operations on the input that replace copying with general context-independent token-level translation.",
"As will be discussed, the core of our approach is a (non-contextual) lexicon that maps individual input tokens to individual output tokens.",
"Learning lexicons like this is of interest in a number of communities in NLP and language science more broadly.",
"A pair of representative approaches (Brown et al., 1993; Frank et al., 2007) will be discussed in detail below; other work on lexicon learning for semantics and translation includes Liang et al. (2009); Goldwater (2007); Haghighi et al. (2008) among numerous others.",
"Finally, and closest to the modeling contribution in this work, several previous papers have proposed alternative generalized copy mechanisms for tasks other than semantic lexicon learning.",
"Concurrent work by Prabhu and Kann (2020) introduces a similar approach for grapheme-to-phoneme translation (with a fixed functional lexicon rather than a trainable parameter matrix), and Nguyen and Chiang (2018) and Gu et al. (2019) describe less expressive mechanisms that cannot smoothly interpolate between lexical translation and ordinary decoding at the token level.",
"Pham et al. (2018) incorporate lexicon entries by rewriting input sequences prior to ordinary sequence-to-sequence translation.",
"Akyrek et al. (2021) describe a model in which a copy mechanism is combined with a retrieval-based generative model; like the present work, that model effectively disentangles syntactic and lexical information by using training examples as implicit representations of lexical correspondences.",
"We generalize and extend this previous work in a number of ways, providing a new parameterization of attentive token-level translation and a detailed study of initialization and learning.",
"But perhaps the most important contribution of this work is the observation that many of the hard problems studied as compositional generalization have direct analogues in more conventional NLP problems, especially machine translation.",
"Research on system-aticity and generalization would benefit from closer attention to the ingredients of effective translation at scale.",
"This paper focuses on sequence-to-sequence language understanding problems like the ones depicted in Table 1, in which the goal is to map from a natural language input x = [ x 1 , x 2 , . . . , x n ] to a structured output y = [ y 1 , y 2 , . . . , y m ] a logical form, action sequence, or translation.",
"We assume input tokens x i are drawn from a input vocabulary V x , and output tokens from a corresponding output vocabulary V y .",
"Neural encoderdecoders Our approach builds on the standard neural encoderdecoder model with attention (Bahdanau et al., 2014).",
"In this model, an encoder represents the input sequence [ x 1 , . . . , x n ] as a sequence of representations [ e 1 , . . . , e n ] e = encoder ( x ) (1) Next, a decoder generates a distribution over output sequences y according to the sequentially: log p ( y | x ) = y (cid:88) i =1 log p ( y i | y <i , e, x ) (2) Here we specifically consider decoders with attention .",
"2 When predicting each output token y i , we assign each input token an attention weight ji as in Eq.",
"(3).",
"Then, we construct a context representation c i as the weighted sum of encoder representations e i : ji exp( h (cid:62) i W att e j ) (3) c i = | x | (cid:88) j =1 ji e j (4) The output distribution over V y , which we denote p write ,i , is calculated by a final projection layer: p ( y i = w | x ) = p write i ( w ) exp( W write [ c i , h i ]) (5) Copying A popular extension of the model described above is the copy mechanism , in which output tokens can be copied from the input sequence in addition to being generated directly by the decoder (Jia and Liang, 2016; See et al., 2017).",
"Using the decoder hidden state h i from above, the model first computes a gate probability : p gate = ( w (cid:62) gate h i ) (6) and then uses this probability to interpolate between the distribution in Eq.",
"(5) and a copy distribution that assigns to each word in the output vocabulary a probability proportional to that word's weight in the attention vector over the input: p copy ( y i = w | x ) = | x | (cid:88) j =1 1 [ x j = w ] ji (7) p ( y i = w | x ) = p gate p write ( y i = w | x ) + (1 p gate ) p copy ( y i = w | x ) (8) (note that this implies V y V x ).",
"Content-independent copying is particularly useful in tasks like summarization and machine translation where rare words (like names) are often reused between the input and output.",
"2 All experiments in this paper use LSTM encoders and decoders, but it could be easily integrated with CNNs or transformers (Gehring et al. 2017; Vaswani et al. 2017).",
"We only assume access to a final layer h i , and final attention weights i ; their implementation does not matter.",
"Our model: Lexical translation When the input and output vocabularies are significantly different, copy mechanisms cannot provide further improvements on a sequence-to-sequence model.",
"However, even for disjoint vocabularies as in Fig. 1, there may be strict correspondences between individual words on input and output vocabularies, e.g. zup (cid:55) y in Fig.",
"1. Following this intuition, the lexical translation mechanism we introduce in this work extends the copy mechanism by introducing an additional layer of indirection between the input sequence x and the output prediction y i as shown in Fig.",
"2. Specifically, after selecting an input token x j V x , the decoder can translate it to a context-independent output token V y prior to the final prediction.",
"We equip the model with an additional lexicon parameter L, a |V x | |V y | matrix in which (cid:80) w L vw = 1 , and finally define p lex ( y i = w | x ) = | x | (cid:88) j =1 L x j w ji (9) p ( y i = w | x ) = p gate p write ( y i = w | x ) + (1 p gate ) p lex ( y i = w | x ) (10) The model is visualized in Fig.",
"2. Note that when V x = V y and L = I is diagonal, this is identical to the original copy mechanism.",
"However, this approach can in general be used to produce a larger set of tokens.",
"As shown in Table 1, coherent token-level translation rules can be identified for many tasks; the lexical translation mechanism allows them to be stored explicitly, using parameters of the base sequence-to-sequence model to record general structural behavior and more complex, context-dependent translation rules.",
"The lexicon parameter L in the preceding section can be viewed as an ordinary fully-connected layer inside the copy mechanism, and trained end-to-end with the rest of the network.",
"As with other neural network parameters, however, our experiments will show that the initialization of the parameter L significantly impacts downstream model performance, and specifically benefits from initialization with a set of inputoutput mappings learned with an offline lexicon learning step.",
"Indeed, while not widely used in neural sequence models (though c.f. Section 2), lexicon-based initialization was a standard feature of many complex non-neural sequence transduction models, including semantic parsers (Kwiatkowski et al., 2011) and phrase-based machine translation systems (Koehn et al., 2003).",
"But an important distinction between our approach and these others is the fact that we can handle outputs that are not (transparently) compositional.",
"Not every fragment of an input will correspond to a fragment of an output: for example, thrice in SCAN has no corresponding output token and instead describes a structural transformation.",
"Moreover, the lexicon is not the only way to generate: complex mappings can also be learned by p write without going through the lexicon at all.",
"Thus, while most existing work on lexicon learning aims for complete coverage of all word meanings, the model described in Section 3 benefits from a lexicon with high-precision coverage of rare phenomena that will be hard to learn in a normal neural model.",
"Lexicon learning is widely studied in language processing and cognitive modeling, and several approaches with very different inductive biases exist.",
"To determine how to best initialize L , we begin by reviewing three algorithms in Section 4.1, and identify ways in which each of them fail to satisfy the high precision criterion above.",
"In Section 4.2, we introduce a simple new lexicon learning rule that addresses this shortcoming.",
"Statistical alignment In the natural language processing literature, the IBM translation models (Brown et al., 1993) have served as some of the most popular procedures for learning token-level inputoutput mappings.",
"While originally developed for machine translation, they have also been used to initialize semantic lexicons for semantic parsing (Kwiatkowski et al., 2011) and grapheme-to-phoneme conversion (Rama et al., 2009).",
"We initialize the lexicon parameter L using Model",
"2. Model 2 defines a generative process in which source words y i are generated from target words x j via latent alignments a i .",
"Specifically, given a (source, target) pair with n source words and m target words, the probability that the target word i is aligned to the source word j is: p ( a i = j ) exp (cid:0) (cid:12)(cid:12)(cid:12) i m j n (cid:12)(cid:12)(cid:12)(cid:1) (11) Finally, each target word is generated by its aligned source word via a parameter : p ( y i = w ) = ( v, x a i ) .",
"Alignments a i and lexical parameters can be jointly estimated using the expectation maximization algorithm (Dempster et al., 1977).",
"In neural models, rather than initializing lexical parameters L directly with corresponding IBM model parameters , we run Model 2 in both the forward and reverse directions, then extract counts by intersecting these alignments and applying a softmax with temperature : L vw exp (cid:0) 1 (cid:88) ( x,y ) | y | (cid:88) i =1 1 [ x a i = v ] 1 [ y i = w ] (cid:1) (12) For all lexicon methods discussed in this paper, if an input v is not aligned to any output w , we map it to itself if V x V y .",
"Otherwise we align it uniformly to any unmapped output words (a mutual exclusivity bias , Gandhi and Lake 2020).",
"Mutual information Another, even simpler procedure for building a lexicon is based on identifying pairs that have high pointwise mutual information .",
"We estimate this quantity directly from co-occurrence statistics in the training corpus: pmi ( v ; w ) = log # ( v, w ) # ( v ) # ( w ) + log | D train | (13) where #( w ) is the number of times the word w appears in the training corpus and #( w, v ) is the number of times that w appears in the input and v appears in the output.",
"Finally, we populate the parameter L via a softmax transformation: L vw exp((1 / ) pmi ( v ; w )) .",
"Bayesian lexicon learning Last, we explore the Bayesian cognitive model of lexicon learning described by Frank et al. (2007).",
"Like IBM model 2, this model is defined by a generative process; here, however, the lexicon itself is part of the generative model.",
"A lexicon (cid:96) is an (unweighted, many-to-many) map defined by a collection of pairs (x, y) with a description length prior: p ( (cid:96) ) e | (cid:96) | (where | (cid:96) | is the number of (input, output) pairs in the lexicon).",
"As in Model 2, given a meaning y and a natural-language description x , each x i is generated independently.",
"We define the probability of a word being used non-referentially as p NR ( x i | (cid:96) ) 1 if x i (cid:54) (cid:96) and otherwise.",
"The probability of being used referentially is: p R ( x j | y i , (cid:96) ) 1 ( x j ,y i ) (cid:96) .",
"Finally, p ( x j | y i , (cid:96) ) = (1 ) p NR ( x j | (cid:96) ) + | y | (cid:88) i =1 p R ( x j | y i , (cid:96) ) (14) To produce a final lexical translation matrix L for use in our experiments, we set L vw exp((1 / ) p (( v, w ) (cid:96) )) : each entry in L is the posterior probability that the given entry appears in a lexicon under the generative model above.",
"Parameters are estimated using the MetropolisHastings algorithm, with details described in Appendix C. 4.2 A Simpler Lexicon Learning Rule Example lexicons learned by the three models above are depicted in Fig. 3 for the SCAN task shown in Table",
"1. Lexicons learned for remaining tasks can be found in Appendix B. It can be seen that all three models produce errors: the PMI and Bayesian lexicons contain too many entries (in both cases, numbers are associated with the turn right action and prepositions are associated with the turn left action).",
"For the IBM model, one of the alignments is confident but wrong, because the around preposition is associated with turn left action.",
"In order to understand these errors, and to better characterize the difference between the demands of lexical translation model initializers and past lexicon learning schemes, we explore a simple logical procedure for extracting lexicon entries and thricetwice oppositeafteraroundrightwalkrunleftlookjump IBMModel-2 PMIIRIG HTIWALKIRU NIL EFTILO O KIJUMP and thricetwice oppositeafteraroundrightwalkrunleftlookjump Bayesian IRIG HTIWALKIRU NIL EFTILO O KIJUMP Simple Figure 3: Learned lexicons for the around right split in SCAN ( = 0 . 1 ).",
"that, surprisingly, matchers or outperforms all three baseline methods in most of our experiments.",
"What makes an effective, precise lexicon learning rule?",
"As a first step, consider a maximally restrictive criterion (which we'll call C 1 ) that extracts only pairs ( v, w ) for which the presence of v in the input is a necessary and sufficient condition for the presence of w in the output.",
"nec.",
"( v, w ) = xy.",
"( w y ) ( v x ) (15) suff.",
"( v, w ) = xy.",
"( v x ) ( w y ) (16) C 1 ( v, w ) = nec.",
"( v, w ) suff.",
"( v, w ) (17) C 1 is too restrictive: in many language understanding problems, the mapping from surface forms to meanings is many-to-one (in Table 1, both blessed and bless are associated with the logical form bless ).",
"Such mappings cannot be learned by the algorithm described above.",
"We can relax the necessity condition slightly, requiring either that v is a necessary condition for w , or is part of a group that collectively explains all occurrences of w : no-winner ( w ) = (cid:64) v (cid:48) .",
"C 1 ( v (cid:48) , w ) (18) C 2 ( v, w ) = suff.",
"( v, w ) ( nec.",
"( v, w ) no-win.",
"( w )) (19) As a final refinement, we note that C 2 is likely to capture function words that are present in most sentences, and exclude these by restricting the lexicon to words below a certain frequency threshold: C 3 = C 2 (cid:12)(cid:12) { v (cid:48) : suff.",
"The lexicon matrix L is computed by taking the word co-occurrence matrix, zeroing out all entries where C 3 does not hold, then computing a softmax: L vw C 3 ( v, w ) exp((1 / ) #( v, w )) .",
"Surprisingly, as shown in Fig. 3 and and evaluated below, this rule (which we label Simple) produces the most effective lexicon initializer for three of the four tasks we study.",
"The simplicity (and extreme conservativity) of this rule highlight the different demands on L made by our model and more conventional (e.g. machine translation) approaches: the lexical translation mechanism benefits from a small number of precise mappings rather than a large number of noisy ones.",
"We investigate the effectiveness of the lexical translation mechanism on sequence-to-sequence models for four tasks, three focused on compositional generalization and one on low-resource machine translation.",
"In all experiments, we use an LSTM encoderdecoder with attention as the base predictor.",
"We compare our approach (and variants) with two other baselines: GECA (Andreas 2020; a data augmentation scheme) and SynAtt (Russin et al. 2019; an alternative seq2seq model parameteriza-tion).",
"Hyper-parameter selection details are given in the Appendix C. Unless otherwise stated, we use = 0 and do not fine-tune L after initialization.",
"Task The Colors sequence translation task (see Appendix A for full dataset) was developed to measure human inductive biases in sequence-to-sequence learning problems.",
"It poses an extreme test of low-resource learning for neural sequence models: it has only 14 training examples that combine four named colors and three composition operations that perform concatenation, repetition and wrapping.",
"Liu et al. (2020) solve this dataset with a symbolic stack machine; to the best of our knowledge, our approach is the first pure neural sequence model to obtain non-trivial accuracy.",
"Results Both the Simple and IBMM2 initializers produce a lexicon that maps only color words to colors.",
"Both, combined with the lexical translation mechanism, obtain an average test accuracy of 79% across 16 runs, nearly matching the human accuracy of 81% reported by Lake et al. (2019).",
"The two test examples most frequently predicted incorrectly require generalization to longer sequences than seen during training.",
"More details (includ-ing example-level model and human accuracies) are presented in the appendix Appendix A).",
"These results show that LSTMs are quite effective at learning systematic sequence transformation rules from 3 examples per function word when equipped with lexical translations.",
"Generalization to longer sequences remains as an important challenge for future work.",
"Task SCAN (Lake and Baroni, 2018) is a larger collection of tests of systematic generalization that pair synthetic English commands (e.g. turn left twice and jump ) to action sequences (e.g. LTURN LTURN IJUMP ) as shown in Table",
"1. Following previous work, we focus on the jump and around right splits, each of which features roughly 15,000 training examples, and evaluate models' ability to perform 1-shot learning of new primitives ( jump ) and zero-shot interpretation of composition rules ( around right ).",
"While these tasks are now solved by a number of specialized approaches, they remain a challenge for conventional neural sequence models, and an important benchmark for new models.",
"Results In the jump split, all initializers improve significantly over the base LSTM when combined with lexical translation.",
"Most methods achieve 99% accuracy at least once across seeds.",
"These results are slightly behind GECA (in which all runs succeed) but ahead of SynAtt.",
"3 Again, they show that lexicon learning is effective for systematic generalization, and that simple initializers (PMI and Simple) outperform complex ones.",
"Task COGS (Compositional Generalization for Semantic Parsing; Kim and Linzen 2020) is an automatically generated English-language semantic parsing dataset that tests systematic generalization in learning language-to-logical-form mappings.",
"It includes 24155 training examples.",
"Compared to the Colors and SCAN datasets, it has a larger vocabulary (876 tokens) and finer-grained inventory of syntactic generalization tests (Table 3).",
"Results Notably, because some tokens appear in both inputs and logical forms in the COGS 3 SynAtt results here are lower than reported in the original paper, which discarded runs with a test accuracy of 0%.",
"task, even a standard sequence-to-sequence model with copying significantly outperforms the baseline models in the original work of Kim and Linzen (2020), solving most tests of generalization over syntactic roles for nouns (but performing worse at generalizations over verbs, including passive and dative alternations).",
"As above, the lexical translation mechanism (with any of the proposed initializers) provides further improvements, mostly for verbs that baselines model incorrectly (Table 3).",
"Task To demonstrate that this approach is useful beyond synthetic tests of generalization, we evaluate it on a low-resource EnglishChinese translation task (the Tatoeba 4 dataset processed by Kelly 2021).",
"For our experiments, we split the data randomly into 19222 training and 2402 test pairs.",
"Results Results are shown in Table",
"4. Models with a lexical translation mechanism obtain modest improvements (up to 1.5 BLEU) over the baseline.",
"Notably, if we restrict evaluation to test sentences 4 https://tatoeba.org/ ENG-CHN full 1-shot LSTM 24.18 0 .",
"featuring English words that appeared only once in the training set, BLEU improves by more than 2 points, demonstrating that this approach is particularly effective at one-shot word learning (or fast mapping ; Carey and Bartlett 1978).",
"Fig. 2 shows an example from this dataset, in which the model learns to reliably translate Saturn from a single training example.",
"GECA, which makes specific generative assumptions about data distributions, does not generalize to a more realistic low resource MT problem.",
"However, the lexical translation mechanism remains effective in natural tasks with large vocabularies and complex grammars.",
"In all the experiments above, the lexicon was dis-cretized ( = 0 ) and frozen prior to training.",
"In this final section, we revisit that decision, evaluating whether the parameter L can be learned from scratch, or effectively fine-tuned along with decoder parameters.",
"Experiments in this section focus on the COGS dataset.",
"Offline initialization of the lexicon is crucial.",
"Rather than initializing L using any of the algorithms described in Section 3, we initialized L to a uniform distribution for each word and optimized COGS LSTM 0.51 0 .",
"it during training.",
"This improves over the base LSTM ( Uniform in Table 5), but performs significantly worse than pre-learned lexicons.",
"Benefits from fine-tuning are minimal.",
"We first increased the temperature parameter to 0.1 (pro-viding a soft lexicon); this gave a 1% improvement on COGS (Table",
"5. Soft ).",
"Finally, we updated this soft initialization via gradient descent; this provided no further improvement (Table 5, Learned ).",
"One important feature of COGS (and other tests of compositional generalization) is perfect training accuracy is easily achieved; thus, there is little pressure on models to learn generalizable lexicons.",
"This pressure must instead come from inductive bias in the initializer.",
"We have described a lexical translation mechanism for representing token-level translation rules in neural sequence models.",
"We have additionally described a simple initialization scheme for this lexicon that outperforms a variety of existing algorithms.",
"Together, lexical translation and proper initialization enable neural sequence models to solve a diverse set of tasksincluding semantic parsing and machine translationthat require 1-shot word learning and 0-shot compositional generalization.",
"Future work might focus on generalization to longer sequences, learning of atomic but non-concatenative translation rules, and online lexicon learning in situated contexts.",
"This work was supported by the Machine-LearningApplications initiative at MIT CSAIL and the MITIBM Watson AI lab.",
"Computing resources were provided by a gift from NVIDIA through the NVAIL program and by the Lincoln Laboratory Supercloud."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"result",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"objective",
"other",
"objective",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"Many tasks aim to measure MACHINE READING COMPREHENSION (MRC), often focusing on question types presumed to be difficult.",
"Rarely, however, do task designers start by considering what systems should in fact comprehend.",
"In this paper we make two key contributions.",
"First, we argue that existing approaches do not adequately define comprehension; they are too unsystematic about what content is tested.",
"Second, we present a detailed definition of comprehensiona TEMPLATE OFUNDERSTANDING for a widely useful class of texts, namely short narratives.",
"We then conduct an experiment that strongly suggests existing systems are not up to the task of narrative understanding as we define it.",
"Over the past few years, neural models (e.g., Chen et al., 2016; Devlin et al., 2019; Liu et al., 2019) have begun to match or even exceed human performance on MACHINE READING COMPREHENSION (MRC) benchmarks.",
"In these tasks, systems demonstrate their comprehension of a passage by answering questions about it.",
"Yet despite recent successes, MRC appears far from solved: systems continue to make basic, sometimes baffling mistakes, and they fail to generalize to new data.",
"Such shortcomings have motivated a flurry of new MRC tasks, each designed to confront systems with questions deemed challenging for current methods.",
"For example, tasks may ask questions requiring commonsense reasoning (Huang et al., 2019), multihop reasoning (Welbl et al., 2018), or inferences based on a second passage (Lin et al., 2019).",
"This line of research assumes that ever-more-difficult question-answering tasks will ultimately lead to more robust and useful reading comprehension.",
"We argue that, while the question-answering * Equal contributions.",
"format can be a fine choice for how to test comprehension, using difficulty as the basis for what to test is fundamentally flawed.",
"To put it provocatively, the dominant MRC research paradigm is like trying to become a professional sprinter by glancing around the gym and adopting any exercises that look hard.",
"The training may end up exercising some relevant muscles, but it is far too haphazard to achieve the ultimate goal.",
"Like athletic training, MRC tasks are not an end in themselves; ultimately, they are meant to lead to real-world applications.",
"Current tasks may suffice for sufficiently similar applicationse.g., chatbots that look up customer questions in product documentation.",
"But many proposed NLP applications hinge on deeper comprehension.",
"Early work (e.g., Dyer, 1982) pointed to examples like assistance with legal disputes and service contracts; more recent work suggests applications such as summarizing a patient's clinical timeline (Jung et al., 2011).",
"For such complex applications, machines will need to manipulate rich models of the world evoked by the texte.g., to compare a claimant's narrative to legal standards, or to build a causal model of a patient's condition.",
"From this broader perspective, the current paradigm falls short.",
"Specifically, we claim that in the quest for difficulty, task designers overlook the issue of what content what information expressed, implied, or relied on by the passagesystems should comprehend.",
"MRC datasets are usually constructed by having humans cast about for supposedly tricky questions, most often questions based on reasoning.",
"But the questions that result are scattershot, offering little assurance that even a high-scoring system has achieved a useful and robust understanding.",
"We advocate for a different approach.",
"We propose that the first step in defining MRC tasks should be specifying what content a system would likely need to understand for a given class of applications.",
"This paper demonstrates such an approach for applications that involve understanding narratives.",
"1 After reviewing existing approaches to constructing MRC datasets (2), we argue for narratives as a valuable MRC testbed (3.1).",
"Then, inspired by cognitive science research on reading comprehension, we propose a template of understanding (ToU) for storiesan account of what an internal model of a story should minimally contain (3.2).",
"We also suggest ways to operationalize our ToU as a story comprehension task (4).",
"Finally, we show evidence from a pilot ToU-based task that current MRC models are not up to the challenge (5).",
"This paper addresses how MRC tests can be made more systematic.",
"Accordingly, we review existing tasks grouped by their data collection methods.",
"We argue that each category falls short of testing a useful body of content in a satisfying way.",
"By far the most popular strategy for generating MRC questions is to have humansusually crowd workers, but sometimes trained annotatorsthink of questions about each passage.",
"The most straightforward version of this method gives annotators little to no guidance regarding what questions to ask.",
"One early example is the TREC-8 dataset (Voorhees and Tice, 2000).",
"In the more recent SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) entailment tasks, the only constraint on crowd workers was that they produce one entailed, one contradicted, and one neutral hypothesis for each premise sentence.",
"2 Similarly, the workers who assembled NewsQA (Trischler et al., 2017) were told only that the questions had to be answerable with short phrases, and workers for SQuAD (Rajpurkar et al., 2016) were simply given a good and a bad example and encouraged to use original wording.",
"1 We will use narrative and story interchangeably, roughly following the Wikipedia definition: A narrative or story is an account of a series of related events, experiences, or the like, whether true...or fictitious. 2 Parts of the original RTE datasets (Dagan et al., 2006, etc.) were generated more systematically, but only in the sense that the outputs of NLP tools (e.g., translation or information extraction systems) were recorded as correct/incorrect examples of entailment.",
"Little attention was paid to subject matter.",
"The problem with such an open-ended generation process is that, absent stronger guidance, people tend to write simple questions that can be answered using lexical cues.",
"(See, e.g., the dataset analysis in Rajpurkar et al., 2016.)",
"This makes the tasks questionable measures of comprehension.",
"The dominant solution is to incorporate trickier twists.",
"NarrativeQA (Kocisky et al., 2018) and DuoRC (Saha et al., 2018) reduce lexical similarity between questions and passages by showing annotators only a second passage about the same events.",
"Other datasets emphasize reasoning presumed to be difficult, such as incorporating information from multiple parts of the text.",
"MCTest (Richardson et al., 2013) and MultiRC (Khashabi et al., 2018) ask for questions that rely on multiple sentences; ROPES (Lin et al., 2019) has annotators apply information from one passage to write questions on a second; and HotpotQA (Yang et al., 2018b) and QASC (Khot et al., 2019) require multi-hop reasoning.",
"Other forms of reasoning tested include coreference resolution (Quoref, Dasigi et al., 2019; Winograd Schema Challange, Levesque et al., 2012), numerical reasoning (DROP, Dua et al., 2019), and commonsense reasoning (Cosmos QA, Huang et al., 2019).",
"Tasks can also be made harder with devices such as unanswerable questions (SQuADRUn, Rajpurkar et al., 2018; NewsQA; CosmosQA) and filtering questions with an adversarial baseline (DROP; Quoref; QASC).",
"These twists do make MRC harder.",
"But to pursue hard questions is to overlook why easy questions seemed inadequate in the first place: MRC tasks are a means to an end, namely useful applications, and easy questionse.g., questions that depend only on lexical cuesdo not suffice for that end.",
"The techniques above may help by guiding annotators to a different space of questions: intuition suggests that some of these harder questions are indeed useful ones.",
"But such techniques are an incomplete solution, as difficulty is a weak proxy for utility.",
"What matters is not the system's sophistication per se; it is the alignment between the questions the system can answer and the ones a given application needs it to.",
"Designing for difficulty still gives little assurance of such alignment.",
"Perhaps a truly random walk through question space would eventually cover a representative set of useful questions, but annotators are biased toward questions that humans find interesting (see Gordon and Van Durme, 2013; Misra et al., 2016; Zhang et al., 2017).",
"They do not think to ask questions whose answers seem obvious, even when those answers are essential to comprehension.",
"If we do not delineate such facts and evaluate systems' ability to manipulate them, we will never be satisfied that the systems have adequately understood the text.",
"A second approach is to find questions in the wild, then retrospectively collect documents containing the answers.",
"This is the approach of BoolQ (Clark et al., 2019) and MS MARCO (Nguyen et al., 2016), which compile search engine queries, and of ELI5 (Fan et al., 2019), which harvests questions from Reddit's Explain Like I'm Five forum.",
"Such datasets are clearly useful for answering common queries, a valuable application class in its own right.",
"For more complex applications, however, common queries are, if anything, less thorough than annotators at probing important elements of understanding (particularly aspects humans find obvious).",
"The mismatch between questions and passage content is exacerbated by finding the passages retrospectively: the questions do not even attempt to test most of what each passage discusses, making them an insufficient measure of MRC.",
"The third strategy is to pull questions from tests written for humans.",
"Examples include the early Deep Read corpus (Hirschman et al., 1999); the more recent TriviaQA (Joshi et al., 2017) and SearchQA (Dunn et al., 2017) datasets, which mine collections of trivia questions; the AI2 Reasoning Challenge (ARC; Clark et al., 2018), which asks questions from standardized science tests; and RACE (Lai et al., 2017), which draws from English learning materials for Chinese school students.",
"Our chief concern about this approach echoes our concerns from 2.1: tests designed for humans rarely bother to test content that most humans find obvious.",
"Accordingly, they gloss over vast swaths of understanding that machines do not yet have but which may be critical to applications.",
"In addition, SearchQA, TriviaQA, and ARC find passages retrospectively, so again, the questions they ask only tangentially graze the content of each passage.",
"Several projects generate questions algorithmically.",
"The CNN / Daily Mail datasets (Hermann et al., 2015) and ReCoRD (Zhang et al., 2018) produce cloze-style questions over news passages by masking out entities from summaries and below-the-fold sentences.",
"ComplexWebQuestions (CWQ; Talmor and Berant, 2018) and WikiHop (Welbl et al., 2018) test for multi-hop reasoning by walking a structured knowledge base.",
"Finally, bAbI (Weston et al., 2016) generates short texts and questions from a simple simulation of characters moving around.",
"Each algorithm encodes assumptions about what is worth asking.",
"In theory, then, the algorithmic approach could produce a satisfying MRC test: given appropriate inputs, the algorithm could aim to generate questions that cover important content.",
"Indeed, our proposal in 4.1 can be seen as a question generation algorithm to be run by humans.",
"In practice, however, algorithmic approaches have de-emphasized content.",
"CNN / Daily Mail and ReCoRD capture explicit assertions about maskable entities, which do not amount to a principled body of content.",
"The algorithms behind CWQ and WikiHop at least take as input some body of content, namely knowledge graphs.",
"But the graphs include only a fractionagain, not a principled oneof the associated documents' content, and the questions are further restricted to rely on multihop reasoning.",
"Multi-hop reasoning is no doubt a major error source for MRC, but applications are driven by what propositions must be extracted; whether each proposition takes zero inference steps or seven is immaterial.",
"Accordingly, multi-hop questions are worth investigating, but they are not a sufficiently well-motivated body of content to constitute a measure of reading comprehension.",
"Similar remarks can be made about most of bAbI's 20 tasks: grounded in simulations, their question generation algorithms start from known content, but target forms of reasoning.",
"However, the tasks concerning time, positions, sizes, pathfind-ing, and motivations are closer to our content-first question generation strategy.",
"These tasks are not driven by applications, and their synthetic passages are unrealistically simple, but among existing datasets, they are closest to our proposal.",
"The most clear-cut way to test reading comprehension would be to select passages, describe what should be comprehended from them, and design tests for that understanding.",
"Yet few MRC datasets have even approximated this approach.",
"Many impose little structure on what content is tested; the rest pick some difficult form(s) of analysis or linguistic phenomena, but rarely consider downstream goals to determine what the questions should be about .",
"Metrics for difficult reasoning and linguistic phenomena (see, e.g., Gardner et al., 2019) are useful, but only as tools for error analysis and mitigation; they are not top-line performance metrics.",
"In addition, many datasets to date suffer from two other problems: 1) they select passages after the questions are asked, meaning the questions test comprehension of only small portions of the passages; and/or 2) they ask very few questions whose answers are obvious to humans.",
"These issues of content scope also intersect with issues of format.",
"Many tasks have adopted a span extraction format, including TREC QA, NewsQA, and (most notably) SQuAD and its successors.",
"This format immediately rules out questions about inferred events or entities, which may be essential to a complete interpretation.The main alternative is multiple choice (MC), used in tasks such as Cosmos QA, RACE, ARC, WikiHop, and every task in GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a).",
"But MC has its own problem of providing extra hints via answer choices.",
"Our approach starts from the content of a passage, which we define as the information it expresses, implies, or relies on.",
"Specifically, we propose that task designers lay out a minimal body of content that MRC systems should demonstrate they understand.",
"Exactly what that content is will vary from passage to passage, of course, but the key is to define a TEMPLATE OF UNDERSTANDING (ToU): a set of question templates that can be filled in with specific events and entities for any given passage.",
"The answers to the fleshed-out questions will constitute a floor of understanding for the passagea plausible lower bound on what content machines ought to comprehend.",
"The natural next question is what content the ToU should cover.",
"System needs will vary by application.",
"To advance MRC writ large without limiting ourselves to a single application, we propose selecting a class of texts where one could reasonably predict a priori what content would be useful for applications.",
"In the rest of this section, we endorse fictional narratives as a particularly promising class of texts and propose a ToU for them.",
"3 3.1 The case for stories Stories have several convenient properties that recommend them as a testbed for MRC.",
"Most importantly, applications that involve comprehending stories are numerous and diverse.",
"Consider a legal aid tool: to assess whether a lawsuit may be warranted, it would have to comprehend an account of the events in question.",
"Likewise, a tool that finds candidates for medical trials would need to read each patient history.",
"(Appendix A fleshes out these scenarios.)",
"These examples are not exceptional; applications in other domains will depend on stories in customer complaints, intelligence dispatches, financial news, and many other document types.",
"Humans tend to think and communicate in terms of stories (see, e.g., Haidt, 2013; Mateas and Sengers, 1999; Bruner, 1991; Eck, 2006), so it is unsurprising that stories are ubiquitous in the content we want NLU tools to help us with.",
"Additionally, stories come with a strong prior from cognitive science about what elements of understanding will be useful.",
"Research on human reading comprehension (e.g., Graesser et al., 1994; Zwaan et al., 1995) suggests that humans attend primarily to the timeline of events, to the locations of entities and events, and to the causes and motivations of events and actions.",
"For applications that involve story comprehension, we can expect that machines will need to understand these same dimensions.",
"We can thus design a principled ToU for stories even without specifying an application.",
"Stories' content also makes them a particularly compelling demonstration of understanding, for two reasons.",
"First, cognitive science suggests that humans make more inferences when reading narrative text than expository text (Graesser et al., 1994).",
"In particular, a story entails a highly structured network of relations (timelines, causality, etc.).",
"Thus, stories do exercise abilities beyond simple factoid extraction.",
"Second, stories rely on a large body of implicit world knowledge.",
"If a system is able to use and express that knowledge when reading stories, it will likely be able to apply the same knowledge even when comprehending other kinds of texts.",
"3 To be clear, we are not claiming that fictional narratives are themselves an application; only that they are a class of texts that are useful for many applications.",
"found in corpora, so systems must rely on comprehending the text (Richardson et al., 2013).",
"Accordingly, we suggest using fictional narratives as the basis for developing a ToU and evaluating MRC.",
"We propose four overlapping clusters of questions for story comprehension, corresponding to the four elements identified by Zwaan et al. (1995) as the ones humans attend to when reading stories.",
"Further support for these questions, particularly the last two, comes from early work in computational story understanding: Schank and Abelson (1977) identify causal chains, plans and goals as crucial elements of understanding multi-sentence stories.",
"1. Spatial: Where are entities positioned over time, relative to landmarks and each other?",
"How are they physically oriented?",
"And where do events take place?",
"2. Temporal: What events and sub-events occur, and in what order?",
"Also, for what blocks of that timeline do entities' states hold true?",
"3. Causal: How do events and states lead mechanistically to the events and states described or implied by the text?",
"4. Motivational: How do agents' beliefs, desires, and emotions lead to their actions?",
"These question templates form the ToU.",
"Systems should ideally be able to answer them about all entities and events that the story mentions or implies (though of course some entities/events are more important than others; see 4.1).",
"We do not have a separate category for who did what to whom information, but we expect strong performance on the ToU to hinge on such analysis.",
"In particular, much of this information is captured in the characterization of events for temporal questions.",
"Of course, these four facets do not cover everything one might comprehend.",
"They include nothing about the story's message, or how it resembles other stories, or even most counting questions.",
"The ToU merely provides a lower bound on what is needed.",
"That said, many forms of reasoning (e.g., counting) can be reduced to deterministically manipulating the answers to multiple ToU questions.",
"Spatial (sample entries) : Rover is in the yard from when he runs out the door until he runs inside.",
"Rover is in the house from when he runs inside until the end of the story.",
"Temporal (sample entries) : Allie arrives just before Rover runs outside.",
"Rover barks just before he runs inside.",
"It is still raining at the end of the story.",
"Motivational (sample entry) : Rover runs inside, rather than staying put, because: If he runs inside, he will be inside, whereas if he does not he will be outside, because: * Rover is outside.",
"* Running to a place results in being there.",
"If Rover is inside, he will not get rained on, whereas if he is outside he will, because: * It is raining.",
"* When it is raining, things that are outside tend to get rained on, whereas things inside do not.",
"Rover would prefer not getting rained on to getting rained on, because: * Most dogs prefer not to get rained on.",
"However, there remains the challenge of opera-tionalizing the frameworki.e., of rigorously assessing whether a machine has that understanding.",
"We do not claim to have solved this problem, but in this section we discuss two broad directions for further development: evaluating based on annotated answers to ToU questions and asking untrained humans to rank different answers.",
"These approaches might even be combined to offer complementary perspectives on system performance.",
"One class of approaches starts with trained annotators writing plain-English answers to each ToU question.",
"The annotators are given guidelines for instantiating the ToU on new stories and for making answers detailed and thorough.",
"We call an anno-tator's answer document a RECORD OF UNDERSTANDING (RoU); see Figure 1 for an example.",
"Conceptually, answering temporal and spatial questions is straightforward, but the causal and motivational questions require more definition.",
"People accept many kinds of answers to such questions.",
"It is therefore important to clarify what a good answer should includei.e., what causal or motivational facts an MRC system should comprehend.",
"We base our account of these questions on the philosophical literature on causality (see Schaffer, 2016) and on the social science literature on what explanations people seek (see Miller, 2019).",
"Following this scholarship, we conceptualize a causal or motivational question as asking what root cause led the event or state from the story to happen rather than some alternative outcome.",
"For example, in a story about Rover the dog, the question of why Rover came inside is taken to mean: Why did Rover come inside, rather than remaining where he was?",
"4 The answer to such a question is a CAUSAL CHAIN tracing from the root cause to the event or state described in the story (see Figure 2 for examples).",
"The links in the chain walk in lockstep through two parallel worlds: the REALIZED WORLD , where the root cause held true and led to the observed outcome; and an ALTERNATIVE WORLD , where the root cause would have been changed and led to some alternative outcome.",
"For mechanistic causation, each link in the chain ends in an event that helped bring about the outcome described in the story.",
"For example, two mechanistic links from Figure 2a are the plant looks brown (rather than green) because it is unhealthy (rather than healthy) and the plant is unhealthy because it has little light (rather than lots of light) .",
"For motivations, the structure is slightly different.",
"Rather than the final link being an event that happened in the story, it is a statement of the agent's preferences (in Figure 2b, Rover would prefer not being rained on to being rained on ).",
"The links leading to it are the future causes and effects that the agent imagines will lead from their action to their preferred outcome (e.g., going inside leading to being inside leading to not getting rained on).",
"The causal chain provides the backbone of an explanation for an event or action, but the full explanation should recursively explain each link (e.g., Rover would prefer not being rained on to being rained on ).",
"Recursive explanations appeal to some combination of general knowledge about the world (e.g., Most dogs prefer not to get rained on ) and 4 Causality as contrast may seem unintuitive, particularly since why questions tend not to state a contrasting outcome.",
"But the audience generally just infers a reasonable default.",
"Beyond its support in the literature, contrast offers several advantages.",
"It makes it far easier to match intuitions about what should factor into a causal explanation.",
"It also naturally handles relative preferences, and allows explaining multiple aspects of an evente.g., John walking carefully can be explained in contrast to both staying put and walking normally.",
"Even with guidelines, different annotators may give substantively different answers.",
"In particular, they may drill down to different levels of detail in a causal chain before bottoming out in general knowl-edgee.g., rather than stopping at dogs disliking rain, one annotator might explain that Rover dis-prefers rain because he dislikes getting wet, which in turn is because dogs often dislike getting wet.",
"To handle such disagreements, we can adopt the pyramid method (Nenkova and Passonneau, 2004) from abstractive summarization, another task where annotators may provide different but equally sensible ground truths.",
"Under this method, a reconciler merges RoUs into a single rubric by identifying shared content nuggets (e.g., that it is raining) and weighting each by how many annotators cited it.",
"(See Voorhees [2004] for more on nuggets.) 4.1.1 Preliminary notes on RoU agreement We conducted a small pilot study on RoU annotation: with the help of 5 annotators, we iteratively crafted guidelines and tested them on 12 stories.",
"Here we share some initial qualitative observations.",
"For spatial annotations, agreement improved when annotators first drew a simple sketch of each scene, then translated their sketches into statements.",
"This process seemed to help annotators notice implicit spatial facts.",
"Some annotators also reported that sketches lowered the cognitive burden.",
"For temporal annotations, annotators generally agreed on what events took place and the temporal relations between them.",
"Disagreements stemmed mainly from choices of which implicit occurrences to annotate.",
"We are exploring ways to promote consistency, including having annotators draw timelines to draw attention to missing events.",
"We are also looking to incorporate prior art (e.g., TimeML; Pustejovsky et al., 2003) into our guidelines.",
"On causal and motivational questions, we were pleasantly surprised by the conceptual consistency between annotators.",
"Annotators appealed to similar causal assertions, even bottoming out in similarly detailed general rules.",
"What was less consistent was structurehow causal chains were carved into links and how bullets were nested.",
"Annotators also occasionally omitted self-evident general rules or supporting facts.",
"We are optimistic that both issues can be improved by more examples and training.",
"(a) A mechanistic causal chain for the question, Why did the plant turn brown?",
"which causal contrasts to include.",
"Such borderline judgments of salience may be inevitable, and seem to warrant use of the pyramid method.",
"It is difficult to evaluate a system directly on an RoU or a rubric, as they are written in plain English.",
"One option is to pose broad ToU questions (e.g., What events happened and in what order?) and then to automatically compare systems' full free-text answers to annotators'.",
"But this would require an automated comparison metric, and existing metrics such as ROUGE and BLEU are concerned only with lexical similarity.",
"Their correlation with humans' quality judgments is substantial but not stellar (Callison-Burch et al., 2006), and high scores do not always indicate good answers in MRC (see Yang et al., 2018a; Nema and Khapra, 2018).",
"Su-perficial similarity measures may prove particularly weak given how open-ended ToU questions are.",
"Alternatively, human evaluators could read both the RoU-derived rubric and the system output and decide whether the output adequately covers each nugget from the rubric.",
"This is how the pyramid method is typically applied in summarization.",
"Still a third possibility is to have human evaluators ask targeted questions about each nugget from the rubric.",
"The evaluators could then judge whether the system's shorter free-text answers reflect a consistent understanding of that nugget.",
"Such evaluation would be especially powerful if the evaluators knew the NLP systems' typical shortcuts and could reword a given question accordingly: a suspicious evaluator could query for the same fact in multiple ways to verify that the system consistently gets it right.",
"This would make results more satisfying than many MRC evaluations, as systems couldn't rely on terse answers being interpreted charitably.",
"Of course, using humans for the final evaluation is expensive, even if automated metrics are used during model development.",
"Human evaluators also add variability and subjectivity, as they may probe differently for the same knowledge or find a given answer more or less convincing.",
"Still, new tasks often start with human evaluation while the community fine-tunes what is worth measuring, and only later to progress to automated metrics that approximate human judgment.",
"Such were the trajectories of topic model coherence (see Lau et al., 2014), summarization (see Yang et al., 2016), and machine translation (see Papineni et al., 2002), so it is a plausible pathway for RoU evaluation, too.",
"Free-response is a compelling format that is tricky to evaluate.",
"Multiple-choice inverts the trade-off: it is less compelling, but much easier to evaluate.",
"With the help of the ToU, a multiple-choice (MC) test can be fairly comprehensive.",
"Question writers would first write out RoUs for a story, and perhaps reconcile them into a weighted rubric.",
"They would then write MC questions targeting each nugget in the rubric: What goal is Rover pursuing by running inside rather than staying put?",
"Where was Rover after he ran through the door?",
"How were Rover, the house, and the rain positioned at the end of the story?",
"Etc.",
"Such a thorough MC test based on RoUs would be a step up from current tasks.",
"The downside of an MC task is that, though easy to evaluate, it would be questionable as a measure of comprehension.",
"All MC tasks suffer from the same lack of naturalness: questions do not normally come with candidate answers, and ranking candidates is simply easier than the tasks MRC should ultimately support.",
"Furthermore, systems learn to exploit incidental surface features in the question, sometimes performing well even without seeing the passage (Kaushik and Lipton, 2018).",
"When humans take MC tests, we can make strong assumptions about what they must know or do to succeed; an NLP system offers no such assurances.",
"In the long run, then, we do not see multiple choice as an adequate format for demonstrating MRC.",
"Still, such tests offer some leverage for progress in the short term.",
"The RoU guidelines put a stake in the ground as to how ToU questions should be answered.",
"But as noted above, ToU questions, particularly why questions, admit many good answers.",
"The ones canonicalized by the guidelines and by annotators following them may not always be the most useful.",
"Consequently, it may prove beneficial to appeal directly to human intuition about what understanding entails.",
"We have assumed that what lets humans perform story-related tasks is that they possess some internal answers to the ToU.",
"If we further assume that humans can be led to favor machine answers that resemble their own internal ones, then humans should make good judges of answer quality even without the guidance of RoUs.",
"Accordingly, we could let humans judge system's full free-text answers based only on intuitive preferences.",
"Evaluators could still be guided to ask ToU questions thoroughly, but extensive guidelines would not be needed: neither asking questions nor recognizing good answers demands nearly as much specification as stating canonical answers.",
"Whereas the approaches in 4.1 must strive for replicability in humans' answers, this approach seeks replicability only in humans' judgments of answers.",
"We suggest two ways to achieve this.",
"First, in the absence of a rubric, we suspect that answers would best be judged via pairwise comparisons.",
"For free-text writing, humans generally find comparative assessment easier than absolute scoring (Pollitt, 2012), and comparison is already used to evaluate natural-language generation (see, e.g., Yatskar et al., 2014).",
"Comparisons also mitigate the difficulty of spotting errors of omission: when evaluators see an incomplete answer in isolation, they may gloss over or mentally fill in what was left unsaid.",
"Comparing against a more complete competing answer makes it easier to notice gaps.",
"Second, evaluators can be guided to tease apart their judgments into several desirable dimensions of explanationse.g., accuracy, depth, and coher-encejust as is often done for natural language generation.",
"Pilot studies would be required to re-fine the dimensions and their specifications.",
"To test existing systems, the questions must be presented in a form the systems can handle.",
"Many systems were designed for span extraction, but the ToU does not lend itself to answering with text spans.",
"Instead, we report on experiments with a pilot version of the MC task described in 4.1.3.",
"To construct the test, we selected the first two narrative stories in the dev set of RACE (Lai et al., 2017).",
"Based on our preliminary annotation guidelines, one annotator read both stories, drafted an RoU for each, and wrote a question for each statement in the rough RoUs.",
"The annotator then collaborated with several others to write distractor answers, each characterized by one or more of the following: small surface variations on the correct answer that change the meaning; language from the passage, especially words that appear near words from the question; and language that might plausibly collocate with words from the question.",
"As an additional test for robustness, questions came in variant groups: each question was paired with a variant, or occasionally more than one, that asks for the same information in a different way (see Figure 3).",
"The distractors were often altered as well.",
"We then evaluated accuracy in two ways: counting each question independently and counting each variant group as one unit.",
"In the latter method, the group is marked correct only if both variants were answered correctly.",
"This simulates a suspicious evaluator re-asking the question and deducting points if the model does not consistently exhibit the desired understanding.",
"The resulting dataset contains a total of 201 questions (98 variant groups).",
"29% are spatial or temporal; the remaining 71% are causal or motivational.",
"The questions average 5.1 options, with a minimum of 4. (Including many distractors somewhat Q) What actually happened when Mr. Green and the man drove together?",
"A) They came to a small house.",
"B) They came to a hotel.",
"C) They traveled around the country.",
"D) They stopped several times at the side of the road.",
"Q') How did the man's directions actually turn out?",
"A) The directions the man gave led to where the man wanted to go.",
"B) The directions the man gave led to where Mr. Green wanted to go.",
"C) The directions Mr. Green gave led to where the man wanted to go.",
"D) The directions Mr. Green gave led to where Mr. Green wanted to go.",
"mitigates the weaknesses of the MC",
"format.) All questions are included in the supplementary materials; Appendix B shows many examples.",
"For validation, the questions were presented to two colleagues with non-technical degrees.",
"They scored 96% and 91% (measured on variant groups), suggesting that motivated, well-educated humans have little trouble with our questions.",
"Finally, we put the questions to XLNet (Yang et al., 2019), 5 a large, transformer-based language model trained with generalized autoregression on BooksCorpus and English Wikipedia.",
"After fine-tuning, the model achieves 81.75% on the original RACE task (within 5 points of the best non-ensemble model at the time of the experiments).",
"Our results (Table 1) show that XLNet performs poorly.",
"On individual questions, it scores just 37%, closing less than a third of the gap between chance and human performance.",
"This strongly suggests that whatever XLNet is doing, it is not learning the ToU's crucial elements of world understanding.",
"Furthermore, the system's performance is brittle, with many correct answers attributable to luck and/or unreliable cues: when moving from questions to variant groups, human performance falls just 3 points.",
"XLNet's performance, on the other 5 For questions with more than four answers, we split the answers across multiple sub-questions, all of whose answer sets contained the correct answer.",
"We counted the question correct only if that answer was chosen across all answer sets.",
"Chance performance was adjusted accordingly.",
"hand, falls 17 points, which leaves the system closing just 18% of the",
"chance-vs.-human gap.",
"Although we tested only XLNet, all the other models that currently dominate the leaderboards are similar pre-trained language models; none has any distinguishing characteristic that might be expected to produce dramatically better results on our dataset.",
"Likewise, no existing dataset is so much more systematic than RACE that fine-tuning on it should dramatically improve results on our dataset.",
"Especially given that multiple-choice tests are artificially easy for systems (see 4.1.3), our pilot experiment offers strong evidence that existing MRC systems do not succeed on the ToU.",
"Our ToU for stories is a first attempt at defining what MRC systems should comprehend in a principled, systematic way.",
"Drawing on work in psychology, philosophy, and pedagogy, we have argued for the ToU as a minimal standard and a valuable target for MRC.",
"We have also shown it to be beyond the reach of current systems.",
"We therefore suggest that the NLP community further build on our ToU.",
"This includes refining and perhaps expanding the questions; better defining the answers and evaluation procedures; building MRC corpora based on the ToU; and developing better-performing systems.",
"We ourselves are working on all four, and we welcome collaboration.",
"But even beyond our ToU, the broader point stands: existing MRC approaches are not satisfactorily testing for a systematic set of content.",
"Our efforts demonstrate that it is possible, with a sufficiently interdisciplinary approach, to define a plausible floor for comprehension for a given class of applications.",
"If MRC is to achieve its ultimate goals, wethe NLP communityowe it to ourselves to ensure that our reading comprehension tests actually test for the comprehension we desire."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"The design of expressive representations of entities and relations in a knowledge graph is an important endeavor.",
"While many of the existing approaches have primarily focused on learning from relational patterns and structural information, the intrinsic complexity of KG entities has been more or less overlooked.",
"More concretely, we hypothesize KG entities may be more complex than we think, i.e., an entity may wear many hats and relational triplets may form due to more than a single reason.",
"To this end, this paper proposes to learn disentangled representations of KG entities a new method that disentangles the inner latent properties of KG entities.",
"Our disentangled process operates at the graph level and a neighborhood mechanism is leveraged to disentangle the hidden properties of each entity.",
"This disentangled representation learning approach is model agnostic and compatible with canonical KG embedding approaches.",
"We conduct extensive experiments on several benchmark datasets, equipping a variety of models (DistMult, SimplE, and QuatE) with our proposed disentangling mechanism.",
"Experimental results demonstrate that our proposed approach substantially improves performance on key metrics.",
"Knowledge graphs (KG) have emerged as a compelling abstraction for organizing structured knowledge.",
"They have been playing crucial roles in many machine learning tasks.",
"A knowledge graph represents a collection of linked data, describing entities of interest and relationships between them.",
"To incorporate KGs into other machine learning systems, a prevalent way is mapping entities and relations of knowledge graphs into expressive representations in a low-dimensional space that preserves the relationships among objects, also known as knowledge graph embeddings.",
"Representative work such as (Bordes et al., 2013; Wang et al., 2014; Yang et al., 2014; Sun et al., 2019; Zhang et al., 2019; Chami et al., 2020) has gained intensive attention across the recent years.",
"The substantial effectiveness of recent work can be attributed to relational pattern modeling in which a suitable relational inductive bias is used to fit the structural information in data.",
"Nevertheless, these methods ignore the fact that the origination and formation of KGs can be rather complex (Ehrlinger and W, 2016).",
"They may be collected, mined, handcrafted or merged in a complicated or convoluted process (Ji et al., 2017; Bosse-lut et al., 2019; Qin et al., 2018).",
"To this end, entities in a knowledge graph may be highly entangled and relational triplets may form and be constructed for various reasons under a plethora of different circumstances or contexts.",
"Contextual reasons and/or domains may be taken into account at the same time.",
"As such, it is only natural that KG embedding methods trained in this fashion would result in highly entangled latent factors.",
"Moreover, the existing holistic approaches fail to disentangle such factors and may result in sub-optimal solutions.",
"Recently, disentangled representation learning has achieved state-of-the-art performance and attracts much attention in the field of visual representation learning.",
"A disentangled representation should separate the distinct, informative factors of variations in the data (Bengio et al., 2013).",
"Disentangling the latent factors hidden in the observed data can not only increase the robustness, making the model less sensitive to misleading correlations but also enhance the model explainability.",
"Disentanglement can be achieved using either supervised signals or unsupervised approaches.",
"Zhu et al. (Zhu et al., 2014) propose to untangle the identity and view features in a supervised face recognition task.",
"A bilinear model is adopted in (Tenenbaum and Freeman, 2000) to separate content from styles.",
"There is also a large body of work on unsupervised disentangled representation learning (Chen et al., 2016; Denton et al., 2017; Higgins et al., 2016).",
"Generally, the disentanglement mechanism is integrated into unsupervised learning frameworks such as variational autoencoders (Kingma and Welling, 2013) and generative adversarial networks (Good-fellow et al., 2014).",
"The quality of unsupervised disentangled representation can even match that learned from supervised label signals.",
"Inspired by the success of disentangled representation learning, we seek to enhance the disentanglement capability of entities representation in knowledge graphs.",
"Our hope is that this idea can address the aforementioned challenge in learning entity embeddings, that is, enabling the entities embeddings to better reflect the their inner properties.",
"Unlike learning disentangled representations in visual data, it is more challenging to disentangle the discrete relational data.",
"Most KGs embedding approaches operate at the triplet level, which is uninformative for disentanglement.",
"Intuitively, information about the entities resides largely within the graph encoded through neighborhood structures.",
"Our assumption is that an entity connects with a certain group of entities for a certain reason.",
"For example, Tim Robbins, as an actor, starred in films such as The Shawshank Redemption ; as a musician, is a member of the folk music group The Highwaymen .",
"We believe that relational triplets form because of different factors and this can be disentangled when looking it at the graph level.",
"To summarize, our key contributions are: (1) We propose Knowledge Router (KR), an approach that learns disentangled representations for entities in knowledge graphs.",
"Specifically, a neighbourhood routing mechanism disentangles the hidden factors of entities from interactions with their neighbors.",
"(2) Knowledge Router is model agnostic, which means that it can play with different canonical knowledge graph embedding approaches.",
"It enables those models to have the capability in learning disentangled entity representations without incurring additional free parameters.",
"(3) We conduct extensive experiments on four publicly available datasets to demonstrate the effectiveness of Knowledge Router.",
"We apply Knowledge Router to models such as DistMult, SimplE, and QuatE and observe a notable performance enhancement.",
"We also conduct model analysis to inspect the inner workings of Knowledge Router.",
"Learning representations from data is the key challenge in many machine learning tasks.",
"The primary posit of disentangled representation learning is that disentangling the underlying structure of data into disjoint parts could bring advantages.",
"Recently, there is a growing interest in learning disentangled representations across various applications.",
"A trending line of work is integrating disentanglement into generative models.",
"(Tran et al., 2017) propose a disentangled generative adversarial network for face recognition and synthesis.",
"The learned representation is explicitly disentangled from a pose variation to make it pose-invariant, which is critical for face recognition/synthesis task.",
"(Denton et al., 2017) present a disentangled representation learning approach for videos.",
"The proposed approach separates each frame into a time-independent component and a temporal dynamics aware component.",
"As such, it can reflect both the time-invariant and temporal features of a video.",
"(Ma et al., 2018) propose a disentangled generative model for personal image generation.",
"It separates out the foreground, background, and pose information, and offers a mechanism to manipulate these three components as well as control the generated images.",
"Some works (Higgins et al., 2016; Burgess et al., 2018) (e.g., -VAE) integrate disentanglement mechanism with variational autoencoder, a probabilistic generative model.",
"-VAE uses a regularization coefficient to constrain the capacity of the latent information channel.",
"This simple modi-fication enables latent representations to be more factorised.",
"Drawing inspiration from the vision community, learning disentangled representations has also been investigated in areas such as natural language processing and graph analysis.",
"(Jain et al., 2018) propose an autoencoders architecture to disentangle the populations, interventions, and outcomes in biomedical texts.",
"(Liu et al., 2019) propose a prism module for semantic disentanglement in named entity recognition.",
"The prism module can be easily trained with downstream tasks to enhance performance.",
"For graph analysis, (Ma et al., 2019a) propose to untangle the node representation of graph-structured data in graph neural networks.",
"(Ma et al., 2019b) present a disentangled variational autoen-coder to disentangle the user's diverse interests for recommender systems.",
"Learning effective representations for knowledge graphs is extensively studied because of its importance in downstream tasks such as knowledge graph completion, natural language understanding, web search, and recommender systems.",
"Among the large body of related literature, two popular lines are translational approaches and semantic matching approaches.",
"The groundbreaking TransE (Bor-des et al., 2013) sets the fundamental paradigm for translational models.",
"Typically, the aim is to reduce the distance between translated (by relation) head entity and tail entity.",
"Successors such as TransH (Wang et al., 2014), TransR (Lin et al., 2015) all follow this translational pattern.",
"Semantic matching methods calculate the semantic similarities between entities.",
"A representative semantic model is DistMult (Yang et al., 2014) which measures the plausibility of triplets with vector multiplications.",
"To model more complex relation patterns, (Trouillon et al., 2016; Zhang et al., 2019; Sun et al., 2019; Zhang et al., 2021) extend the embedding spaces to complex number space or hyperbolic space.",
"A fully expressive model named SimplE (Kazemi and Poole, 2018) could achieve the same level of capability of ComplEx (Trouillon et al., 2016) with lower calculation cost.",
"Inspired by the success of disentangled representations, we explore methods to factorize different components/aspects of entangled entities in a knowledge graph.",
"To the best of our knowledge, our work is one of the first efforts to induce disentangled representations in knowledge graphs.",
"Our disentangled embedding algorithm can be easily integrated into existing knowledge graph embedding models (model agnostic).",
"Suppose we have an entity set E and a relation set R , where |E| = N and |R| = M .",
"A knowledge graph G = ( E , R ) is made up of a collection of facts F in triplet form ( h, r, t ) , where h, t E and r R .",
"The triplet ( h, r, t ) F means that entities h and r are connected via a relation r .",
"The facts are usually directional, which means exchanging the head entity and tail entity does not necessarily result in a legitimate fact.",
"We are concerned with the link prediction task.",
"The goal is to embed the entities and relations of a knowledge graph into low-dimensional rep-Notation Description E Entity set.",
"resentations that can preserve the facts in the graph.",
"A classical setting is using an embedding matrix E RN d to represent all the entities and an embedding matrix W RM d to represent all the relations.",
"Instead of directly modeling triplet facts, we propose to disentangle the entities with their neighbors in a message passing setting.",
"The neighborhood entities could form several clusters for different reasons and the entity is updated by the information accepted from its neighborhood clusters.",
"Figure 1 illustrates the overall process of Knowledge Router.",
"It consists of two stages: (1) disentangling the entities from a graph perspective using neighbourhood routing; (2) scoring the facts using relations and the disentangled entities representations.",
"Let us build an undirected graph from the training data.",
"The relations are anonymized, which means we do not need to know under which conditions two entities are linked.",
"We denote the neighbourhood of entity e as N ( e ) , regardless of the relations.",
"Our neighborhood routing approach operates on this graph.",
"Given an entity e , we aim to learn a disentangled embedding that encodes various attributes of the entity.",
"In this regard, we suppose that each entity is composed of K independent components, with each component denoted by p e,k R dK , where k = 1 , 2 , ..., K .",
"Each component stands for one aspect of the entity, e.g., a role of a person.",
"A Figure 1: The overall procedure of the proposed Knowledge Router algorithm for learning disentangled entity representations.",
"major challenge here is to make the learned K components to be independent of one another so that different facets can be separately encoded.",
"To this end, we adopt routing mechanisms that are inspired by capsule networks (Hinton et al., 2011).",
"Specifically, we aim to learn the K components from both the entity e and its neighbourhoods N ( e ) .",
"Next, we describe this procedure in detail.",
"For each entity e , we first initialize the E e randomly and evenly split it into K parts.",
"The k th part is denoted by x e,k R dK .",
"By doing so, the embedding is projected into different subspaces.",
"To ensure computation stability, each part is also normalized as follows: x e,k = x e,k (cid:107) x e,k (cid:107) 2 (1) This is used for the initialization of p e,k .",
"Obviously, the information contained is limited and it cannot reach the goal of disentanglement.",
"To enrich the information, we use a graph message passing mechanism and define the update rule for the k th component of p e as follows: p e,k = x e,k + AGGREGATE ( { x i,k , i N ( e ) } ) , (2) where AGGREGATE represents the neighborhood aggregation function (defined in equation 5).",
"The same (cid:96) 2 normalization as (1) is applied to p e,k afterwards.",
"In this way, p e,k contains information from the k th aspect of both entity e and all of its neighbors.",
"Common aggregating functions such as mean pooling and sum pooling are viable, but treating each neighbor equally when determining one component of the representation is undoubtedly not sensible.",
"As such, an attention mechanism is used to obtain weights for each neighbor.",
"In particular, a scaled dot-product attention method is applied.",
"We first get the dot product between p e,k and x i,k , i N ( e ) .",
"For each k , we get the following similarity score: s e,i,k = p (cid:62) e,k x i,k (cid:112) d/k , (3) which provides information on how entity e interacts with its neighbour entity i pertaining to the aspect k .",
"Then the softmax function is applied to get the weight distribution over different components for each neighbour.",
"Now, we formulate the definition of the AGGREGATE function as follows: AGGREGATE ( { x i,k , i N ( e ) } ) := (cid:88) i N ( e ) w i,k x i,k (5) The above process, including equations (2), (3), (4), (5) for learning p e,k , k = 1 , 2 , ..., K , is repeated for T iterations, which is the same as that of a routing mechanism.",
"Like capsule networks (Sabour et al., 2017), we also assume that entity (object) is composed of entity (object) parts.",
"This routing method enables it to model part-whole relationships and enlarge the differences between parts after several routing iterations.",
"Afterwards, the concatenation of all K components of an entity is used to represent that entity.",
"That is, the disentangled representation p e of the entity e is defined as: p e = [ p e, 1 , p e, 2 , ..., p e,K ] (6) This neighborhood routing algorithm is model agnostic as our aim is to learn an entity embedding matrix which is necessary for most knowledge graph embedding methods.",
"It is worth noting that this model will not introduce additional free parameters to the model.",
"The intuition behind the routing mechanism is that each facet in an entity has a separate route to contribute to the meaning of this entity.",
"The routing algorithm will coordinately infer p e,k (we can view it as the center of each cluster) and w i,k (the probability that factor k is the reason why entity e is connected with entity i ).",
"They are coordinately learned and under the constraint that each neighbor should belong to one cluster.",
"It is reminiscent of the iterative method used in the EM algorithm (Bishop, 2006) and is expected to lead to convergence and meaningful disentangled representations (Ma et al., 2019a).",
"Until now, the relation embeddings are not utilized as all relations are anonymous during graph construction.",
"This algorithm will be jointly trained with the following facts scoring algorithms.",
"Using disentangled entity embeddings alone cannot recover the facts in a knowledge graph.",
"It shall be further updated simultaneously with the relation embeddings for the fact scoring process.",
"To predict whether a triplet (cid:104) h, r, t (cid:105) holds or not, we first fetch the learned disentangled representation of the head and tail entities, p h and p t .",
"Then we adopt three methods for triplet scoring including DistMult (Yang et al., 2014), SimplE (Kazemi and Poole, 2018), and QuatE (Zhang et al., 2019).",
"We denote the model after disentanglement as: KR-DistMult, KR-SimplE, and KR-QuatE.",
"( h, r, t ) = (cid:104) W r , p h , p t (cid:105)",
"SimplE needs an additional entity embedding matrix H RN d and an additional relation embedding matrix V RM d .",
"We perform the same disentanglement process on H and denote the disentangled representation of entity e as q e , the scoring function of KR-SimplE (SimplE-avg is adopted since it outperforms SimplE-ignr) is: ( h, r, t ) = ( (cid:104) W r , p h , q t (cid:105) + (cid:104) V r , q h , p t (cid:105) ) 1 2 (8) For QuatE, entities and relations are represented with quaternions.",
"Each quaternion is composed of a real component and three imaginary components.",
"Let Q HN d denote the quaternion entity embedding and W HM d denote the quaternion relation embedding, where H is the quaternion space.",
"Each entity is represented by Q e .",
"We apply the Knowledge Router algorithm on each component of Q e .",
"The scoring function of KR-QuatE is: ( h, r, t ) = Q KR h W r | W r | Q KR t (9) where \" is Hamilton product; \" represents the quaternion inner product; Q KR denotes the entity representation after disentanglement.",
"As Knowledge Router is model agnostic, other scoring functions are also applicable.",
"To learn a disentangled KG model, we adopt the following negative log-likelihood loss:",
"where S is the number of training samples (triplets); y ( i ) is a binary label indicating whether the i th triplet holds or not; ( i ) is the prediction for the i th triplet.",
"Our model can be trained with commonly used minibatch gradient descent optimizers.",
"The disentanglement process of each node needs O ( |N ( e ) | dK K + T ( |N ( e ) | dK K + dK K )) time complexity, where |N ( e ) | is neighborhood size.",
"After simplification, the time complexity is O ( T |N ( e ) | d ) .",
"This will not incur a high computational cost since T is usually a small number (e.g., 3), and the neighborhood size is determined by the average degree and can usually be constrainted by a constant value (e.g., 10).",
"With regard to fact Datasets N M | train | | validation | | test | FB15k-237 14,541 237 272,115 17,535 20,466 WIKIDATA 11,153 96 53,252 11,894 11,752 ICEWS14 7,128 230 42,690 7,331 7,419 ICEWS05-15 10,488 251 368,962 46,275 46,092 Table 2: Statistics of datasets used in our experiments.",
"In this section, we conduct experiments on several benchmark datasets to verify the effectiveness of the proposed approach.",
"We target at answering: RQ I : whether the disentanglement method can enhance the traditional knowledge graph embedding methods?",
"RQ II : Model-agnosticism: can it effectively work with different baseline models?",
"RQ III : How do certain important hyper-parameters impact the model performance and what has the disentanglement algorithm learned?",
"Are they meaningful?",
"We use four publicly available datasets including ICEWS14, ICEWS05-15, WikiData, and FB15k-237.",
"The reason for using these is that their entities are complicated and highly entangled.",
"The WordNet dataset is not appropriate to evaluate the proposed method as the entities in WordNet are already disentangled 1 .",
"FB15k-237 is a subset of the Freebase knowledge base which contains general information about the world.",
"We adopt the widely used version generated by (Dettmers et al., 2018) where inverse relations are eliminated to avoid data leakage.",
"WikiData is sampled from Wikidata 2 , a collaborative open knowledge base.",
"The knowledge is relatively up-to-date compared with FB15k-237.",
"We use the version provided by (Garca-Durn et al., 2018).",
"Timestamp is discarded.",
"ICEWS (Garca-Durn et al., 2018) is collected from the integrated crisis early warning system 3 which was built to monitor and forecast national and internal crises.",
"The datasets contain political events that connect entities (e.g., countries, presidents, intergovernmental organizations) to other entities via predicates (e.g., make a visit\", sign formal agreement\", etc.).",
"ICES14 contains events in the year 2014, while the ICEWS05-15 contains 1 For example, a word with five meanings is represented with five different entities in WordNet.",
"We adopt four commonly used evaluation metrics including hit rate with given cut-off (HR@1, HR@3, HR@10) and mean reciprocal rank (MRR).",
"HR measures the percentage of true triples of the ranked list.",
"MRR is the average of the mean rank inverse which reflects the ranking quality.",
"Evaluation is performed under the commonly used filtered setting (Bordes et al., 2013), which is more reasonable and stable compared to the unfiltered setting.",
"To demonstrate the advantage of our approach, we compare the proposed method with several representative knowledge graph embedding approaches including TransE (Bordes et al., 2013), DistMult (Yang et al., 2014), ComplEx (Trouillon et al., 2016), SimplE (Kazemi and Poole, 2018), and QuatE (Zhang et al., 2019).",
"For FB15k-237, the results of RotatE (Sun et al., 2019) and R-GCN (Schlichtkrull et al., 2018) are also included.",
"We implement our model using pytorch (Paszke et al., 2019) and run it on TITAN XP GPUs.",
"We adopt Adam optimizer to learn our model (Good-fellow et al., 2016) and the learning rate is set to 0 .",
"01 without further tuning.",
"The embedding size d is set to 100 and the number of negative samples is fixed to 50 .",
"The batch size is selected from { 128 , 512 , 1024 } .",
"The regularization rate is searched from { 0 .",
"0 , 0 .",
"01 , 0 .",
"1 , 0 .",
"2 , 0 .",
"3 , 0 .",
"5 } .",
"For the disentanglement algorithm, the number of components K is selected from { 2 , 4 , 5 , 10 } ( K should be divisible by d ); the number of routing iterations T is tuned amongst { 2 , 3 , 4 , 5 , 7 , 10 } .",
"The hyper-parameters are determined by the validation set.",
"Each experiment runs five times and the average is reported.",
"For convenience of implementation, the maximum neighbor sizes are: 16 (FB15K-237), 4 (WikiData), 10 (ICEWS14), 16 (ICEWS05-15).",
"We apply zero padding to entities that have fewer neighbors.",
"The test results on the four datasets are shown in Tables 3, 4 and 5.",
"Evidently, we can make the Models FB15k-237 MRR HR@10 HR@3 HR@1 TransE 0.294 0.465 -DistMult 0.241 0.419 0.263 0.155 ComplEx 0.247 0.428 0.275 0.158 SimplE 0.229 0.379 0.252 0.153 R-GCN 0.249 0.417 0.264 0.151 RotatE (cid:63) 0.297 0.480 0.328 0.205 QuatE (cid:5) 0.311 0.495 0.342 0.221 KR-DistMult 0.275 0.450 0.302 0.190 KR-SimplE 0.273 0.438 0.298 0.190 KR-QuatE 0.322 0.507 0.356 0.228 KR-D vs. D +14.1% +7.4% +14.8% +22.6% KR-S vs. S +19.2% +15.5% +18.2% +24.2% KR-Q vs. Q +3.5% +2.4% +4.1% +3.2% Table 3: Results on the FB15K-237 dataset.",
"following observations: (1) Models with Knowledge Router outperform the counterparts without it by a large margin, confirming the effectiveness of Knowledge Router and assuring the benefits of learning disentangled representations.",
"This clearly answers our RQ I ; (2) On the four datasets, we observe a consistent enhancement of Knowledge Router on both traditional embedding models such as DistMult, SimplE, as well as hypercomplex number based model QuatE.",
"This is expected as our Knowledge Router is model agnostic ( RQ II ) and can be integrated to canonical knowledge embedding models.",
"(3) The model KR-QuatE is usually the best performer on all datasets, indicating the generalization capability of Knowledge Router in more complex embedding spaces.",
"recent translational model RotatE and the semantic matching model QuatE.",
"Models such as DistMult and SimplE are also outperformed by KR-DistMult and KR-SimplE.",
"In addition, it is good to note that the performance of each of the three KR-models is much higher than the graph convolutional networks based model, R-GCN.",
"This implies that simply/naively incorporating graph structures might not lead to good performance.",
"Knowledge Router also operates at the graph level, moreover, the neighborhood information is effectively utilized for disentanglement.",
"Similar trends are also observed on WikiData.",
"Interestingly, we find that the performance differences of the three KR-models are quite small on this dataset.",
"We hypothesize that the performance on this dataset has already been quite high, making further improvement more difficult.",
"Among the baselines, SimplE is the best performer.",
"We notice that even though the pure QuatE does not show impressive performance, the Knowledge Router enhances its results and enables it to achieve the state-of-the-art performance.",
"On the two ICEWS datasets, disentanglement usually leads to a large performance boost.",
"The average performance gains of Knowledge Router based models (KR-DistMult, KR-SimplE, KR-QuatE) are high, compared with the original models (DistMult, SimplE, and QuatE).",
"We also observe that KR-QuatE outperforms other models significantly.",
"To conclude, our experimental evidence shows that disentangling the entities can indeed bring performance increase and the proposed Knowledge Router can effectively be integrated into different models.",
"To answer RQ III and gain further insights, we empirically analyze the important ingredients of the model via qualitative analysis and visualization.",
"The attention mechanism is critical to achieving the final disentanglement.",
"To show its efficacy, we visualize four examples of attention weights w i,k in Figure",
"2. The color scale represents the strength of the attention weights.",
"Each row represents a neighbor of the selected entity and each column represents a disentangled component.",
"We observe a clear staggered pattern in the attention weights.",
"For example, in the upper left figure, the neighbors Models ICEWS 14 ICEWS05-15 MRR HR@10 HR@3 HR@1 MRR HR@10 HR@3 HR@1 TransE (cid:63) 0.280 0.637 -0.094 0.294 0.663 -0.090 DistMult (cid:63) 0.439 0.672 -0.323 0.456 0.691 -0.337 SimplE 0.458 0.687 0.516 0.341 0.478 0.708 0.539 0.359 ComplEx 0.638 0.753 0.677 0.574 0.708 0.821 0.748 0.645 QuatE 0.656 0.733 0.673 0.615 0.723 0.817 0.754 0.671 KR-DistMult 0.544 0.740 0.608 0.439 0.611 0.789 0.662 0.519 KR-SimplE 0.588 0.753 0.642 0.498 0.639 0.803 0.689 0.553 KR-QuatE 0.688 0.753 0.692 0.643 0.797 0.853 0.812 0.767 KR-DistMult vs. DistMult +23.9% +10.1% -+11.6% +33.9% +14.2% -+54.0% KR-SimplE vs. SimplE +28.3% +9.6% +24.4% +46.0% +33.7% +13.4% +27.8% +54.0% KR-QuatE vs. QuatE +4.9% +2.7% +2.8% +4.6% +10.2% +4.4% +7.7% +14.3% Table 5: Results on ICEWS14 and ICEWS05-15.",
"1 , 2 , 3 give higher weights to the second component while 0 gives a stronger weight to the first component.",
"In other figures, the attention weights are also staggered among the disentangled components.",
"We randomly pick one entity ( Michael Rensing , a German footballer) from the WikiData and show the learned weight between him and his neighborhood entities in Figure",
"3. We observe that FC Bayern Munich and Jan Kirchhoff (who is also a team member of the FC Bayern Munich club) contribute more on the first component of the representation of Michael Rensing , while Germany national under-18 football team and Germany national under-21 football team make larger contributions to the second component.",
"Clearly, the first component captures the fact that Michael Rensing is a member of the FC Bayern Munich association football club and the second component reflects that he is also a Figure 3: Case study on WikiData for the German footballer Michael Rensing .",
"Germany national football team member.",
"This case justifies our assumption that entities are connected for different reasons and demonstrates that Knowledge Router is able to disentangle the underlying factors effectively.",
"We analyze the impact of K .",
"Intuitively, K is dif-ficult to choose since there is no prior information on how many components we should decompose each entity into.",
"The test results with varying K on ICEWS14 of KR-QuatE are shown in Figure 4",
"(a).",
"As can be seen, using large K could result in a performance degradation.",
"One possible reason is that there are not enough neighborhood entities to be divided into 20 groups.",
"Empirically, we found that setting K to a small value around 2 to 5 can usually render reasonable results.",
"A practical suggestion is that K should not exceed the average degree of the knowledge graph.",
"We study the influence of number of routing iterations.",
"As shown in Figure 4",
"(b), the model performance is stable when using different iterations.",
"The reason is that the Knowledge Router algorithm is not prone to saturation and has good convergence properties.",
"In practice, we find that using a small number of iterations (e.g., 3) could lead to ideal enhancement without putting on much computation burden.",
"In this paper, we present Knowledge Router, an algorithm for learning disentangled entity representations in knowledge graphs.",
"Our method is model agnostic and can be applied to many canonical knowledge graph embedding methods.",
"Extensive experiments on four benchmarking datasets demonstrate that equipping popular embedding models with the proposed Knowledge Router can outperform a number of recent strong baselines.",
"Via qualitative model analysis, we discover that Knowledge Router can effectively learns the hidden factors connecting entities, thus leading to disentanglement.",
"We also showcase the impact of certain important hyper-parameters and give suggestions on hyper-parameters tuning."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Deep NLP models have been shown to be brittle to input perturbations.",
"Recent work has shown that data augmentation using counterfactuals i.e. minimally perturbed inputs can help ameliorate this weakness.",
"We focus on the task of creating counterfactuals for question answering, which presents unique challenges related to world knowledge, semantic diversity, and answerability.",
"To address these challenges, we develop a R etrieveG enerate-F ilter (RGF) technique to create counterfactual evaluation and training data with minimal human supervision.",
"Using an open-domain QA framework and question generation model trained on original task data, we create counterfactuals that are fluent, semantically diverse, and automatically labeled.",
"Data augmentation with RGF counterfactuals improves performance on out-of-domain and challenging evaluation sets over and above existing methods, in both the reading comprehension and open-domain QA settings.",
"Moreover, we find that RGF data leads to significant improvements to robustness to local perturbations.",
"1 1 Introduction Models for natural language understanding (NLU) may outperform humans on standard benchmarks, yet still often perform poorly under a multitude of distributional shifts (Jia and Liang (2017); Naik et al. (2018); McCoy et al. (2019), inter alia ) due to over-reliance on spurious correlations or dataset artifacts.",
"This behavior can be probed using counterfactual data (Kaushik et al., 2020; Gardner et al., 2020) designed to simulate interventions on specific attributes: for example, perturbing the movie review A real stinker, one out of ten!\" to A real classic, ten out of ten!\" allows us to discern the Work performed during an internship at Google. 1 Code at https://github.com/ google-research/language/tree/master/language/qa_counterfactuals RETRIEVE ( REALM) GENERATEFILTER Wikipedia Who is the captain of the Richmond Football Club? List of Richmond Football Club captains >> Jeff Hogg 1994 -1996 ... Current Captain: Trent Cotchin The ... Richmond Football club ran a women's team and ... Jess Kennedy was named the team ' s captain Who captained Richmond Football Club's women's team? Who won the inaugural best in VFL 2018 season? Who captained Richmond Football Club's women's team? Figure 1: Retrieve-Generate-Filter to generate counterfactual queries for Natural Question (Kwiatkowski et al., 2019) using an open-domain retrieval system, question generation and post-hoc filtering. effect of adjective polarity on the model's prediction. Many recent works (Kaushik et al., 2020, 2021; Wu et al., 2021a; Geva et al., 2021, inter alia ) have shown that training augmented with this counterfactual data (CDA) improves out-of-domain generalization and robustness against spurious correlations. Consequently, several techniques have been proposed for the automatic generation of counterfactual data for several downstream tasks (Wu et al., 2021a; Ross et al., 2021b,a; Bitton et al., 2021; Geva et al., 2021; Asai and Hajishirzi, 2020; Mille et al., 2021). In this paper, we focus on counterfactual data for question answering, in both the reading comprehension and open-domain settings (e.g. Rajpurkar et al., 2016; Kwiatkowski et al., 2019). Model inputs consist of a question and optionally a context passage, and the target a is a short answer span. Counterfactuals are often considered in the context of a specific causal model (Miller, 2019; Halpern and Pearl, 2005), but in this work we follow Wu et al. (2021a) and Kaushik et al. (2020) and seek a method to generate counterfactuals that may be use-1670 ful in many different settings. In QA, the set of possible causal features is large and difficult to specify a priori ; relevant factors are often instance-specific and exploring them may require world knowledge. For example, going from Who is the captain of the Richmond Football Club to a perturbed question Who captained Richmond's women's team? as in Figure 1 requires knowledge about the club's alternate teams, and the perturbation Who was the captain of RFC in 1998? requires knowledge about the time-sensitive nature of the original question. In the absence of such knowledge, otherwise reasonable edits such as Who captained the club in 2050? can result in false premises or unanswerable questions. We develop a simple yet effective technique to address these challenges: R etrieve, G enerate, and F ilter (RGF; Figure 1). We use the near-misses of a retrieve-and-read QA model to propose alternate contexts and answers which are closely related to but semantically distinct from the original question. We then use a sequence-to-sequence question generation model (Alberti et al., 2019) to generate corresponding questions to these passages and answers. This results in fully-labeled examples, which can be used directly to augment training data or filtered post-hoc for analysis. 
While our method requires no supervised inputs besides the original task training data, it is able to generate highly diverse counterfactuals covering a range of semantic phenomena (4), including many transformation types which existing methods generate through heuristics (Dua et al., 2021), meaning representations (Ross et al., 2021b; Geva et al., 2021) or human generation (Bartolo et al., 2020; Gardner et al., 2020). Compared to alternative sources of synthetic data (5.1), training augmented with RGF data improves performance on a variety of settings (5.2, 5.3), including out-of-domain (Fisch et al., 2019) and contrast evaluation sets (Bartolo et al., 2020; Gardner et al., 2020), while maintaining in-domain accuracy. Additionally, we introduce a measure of pairwise consistency , and show that RGF significantly improves robustness to a range of local perturbations (6). 2 Related Work 2.1 Counterfactual Generation There has been considerable interest in developing challenge sets for NLU that evaluate models on a wide variety of counterfactual scenarios. Gardner et al. (2020); Khashabi et al. (2020); Kaushik et al. (2020); Ribeiro et al. (2020) use humans to create these perturbations, optionally in an adversarial setting against a particular model (Bartolo et al., 2020). However, these methods can be expensive and difficult to scale. This has led to an increased interest in creating automatic counterfactual data for evaluating out-of-distribution generalization (Bowman and Dahl, 2021) and for counterfactual data augmentation (Geva et al., 2021; Longpre et al., 2021). Some work focuses on using heuristics like swapping superlatives and nouns (Dua et al., 2021), changing gendered words (Webster et al., 2020), or targeting specific data splits (Finegan-Dollak and Verma, 2020). More recent work has focused on using meaning representation frameworks and structured control codes (Wu et al., 2021a), including grammar formalisms (Li et al., 2020), semantic role labeling (Ross et al., 2021b), structured image representations like scene graphs (Bitton et al., 2021), and query decompositions in multi-hop reasoning datasets (Geva et al., 2021). Ye et al. (2021) and Longpre et al. (2021) perturb contexts instead of questions by swapping out all mentions of a named entity. The change in label can be derived heuristically or requires a round of human re-labeling of the data. These may also be difficult to apply to tasks like Natural Questions (Kwiatkowski et al., 2019), where pre-defined schemas can have diffi-culty covering the range of semantic perturbations that may be of interest. 2.2 Data Augmentation Non-counterfactual data augmentation methods for QA, where the synthetic examples are not paired with the original data, have shown only weak improvements to robustness and out-of-domain generalization (Bartolo et al., 2021; Lewis et al., 2021). Counterfactual data augmentation is hypothesized to perform better, as exposing the model to minimal pairs should reduce spurious correlations and make the model more likely to learn the correct, causal features (Kaushik et al., 2020). However, Joshi and He (2021) find that methods that limit the structural and semantic space of perturbations can potentially hurt generalization to other types of transformations. This problem is exacerbated in the question answering scenario where there can be multiple semantic dimensions to edit. 
Our method attempts to address this by targeting a broad range 1671 of semantic phenomena, thus reducing the chance for the augmented model to overfit. 3 RGF: Counterfactuals for Information-seeking Queries We define a counterfactual example as an alternative input x (cid:48) which differs in some meaningful, controlled way from the original x , which in turn allows us to reason or teach the model about changes in the label (the outcome). For question-answering, we take as input triples ( q, c, a ) consisting of the question, context passage, and short answer, and produce counterfactual triples ( q (cid:48) , c (cid:48) , a (cid:48) ) where a (cid:48) (cid:54) = a . This setting poses some unique challenges, such as the need for background knowledge to identify relevant semantic variables to alter, ensuring sufficient semantic diversity in question edits, and avoiding questions with false premises or no viable answers. Ensuring (or characterizing) minimality can also be a challenge, as small changes to surface form can lead to significant semantic changes, and vice-versa. We introduce a general paradigm for data generation R etrieve, G enerate and F ilter to tackle these challenges. 3.1 Overview of RGF An outline of the RGF method is given in Figure 1. Given an input example x = ( q, c, a ) consisting of a question, a context paragraph, and the corresponding answer, RGF generates a set of new examples N ( x ) = { ( q (cid:48) 1 , c (cid:48) 1 , a (cid:48) 1 ) , ( q (cid:48) 2 , c (cid:48) 2 , a (cid:48) 2 ) , . . . } from the local neighborhood around x. We first use an open-domain retrieve-and-read model to retrieve alternate contexts c (cid:48) and answers a (cid:48) where a (cid:54) = a (cid:48) . As near-misses for a task model, these candidates ( c (cid:48) , a (cid:48) ) are closely related to the original target ( c, a ) but often differ along interesting, latent semantic dimensions (Figure 2) in their relation to the original question, context, and answer. We then use a sequence-to-sequence model to generate new questions q (cid:48) from the context and answer candidates ( c (cid:48) , a (cid:48) ) . This yields triples ( q (cid:48) , c (cid:48) , a (cid:48) ) which are fully labeled, avoiding the problem of unanswerable or false-premise questions. Compared to methods that rely on a curated set of minimal edits (e.g. Wu et al., 2021b; Ross et al., 2021b), our method admits the use of alternative contexts 2 c (cid:48) (cid:54) = c , and we do not explicitly constrain 2 An alternative approach would be to make direct, targeted edits to the original context c . However, beyond a limited space of local substitutions (Longpre et al., 2021; Ye et al., our triples to be minimal perturbations during the generation step. Instead, we use post-hoc filtering to reduce noise, select minimal candidates, or select for specific semantic phenomena based on the relation between q and q (cid:48) . This allows us to explore a significantly more diverse set of counterfactual questions q (cid:48) (C.1), capturing relations that may not be represented in the original context c . We describe each component of RGF below; additional implementation details are provided in Appendix A. 3.2 Retrieval We use REALM retrieve-and-read model of (Guu et al., 2020). REALM consists of a BERT-based bi-encoder for dense retrieval, a dense index of Wikipedia passages, and a BERT-based answer-span extraction model for reading comprehension, all fine-tuned on Natural Questions (NQ; Kwiatkowski et al., 2019). 
Given a question q , REALM outputs a ranked list of contexts and answers within those contexts: { ( c (cid:48) 1 , a (cid:48) 1 ) , ( c (cid:48) 2 , a (cid:48) 2 ) , . . . ( c (cid:48) k , a (cid:48) k ) } . These alternate contexts and answers provide relevant yet diverse background information to construct counterfactual questions. For instance, in Figure 1, the question Who is the captain of the Richmond Football Club\" with answer Trent Cotchin\" also returns other contexts with alternate answers like Jeff Hogg\" ( q (cid:48) = Who captained the team in 1994\" ), and Steve Morris\" ( q (cid:48) = Who captained the reserve team in the VFL league\" ). Retrieved contexts can also capture information about closely related or ambiguous entities. For instance, the question who wrote the treasure of the sierra madre\" retrieves passages about the original book Sierra Madre , its movie adaptation, and a battle fought in the Sierra de las Cruces mountains. This background knowledge allows us to perform contextual-ized counterfactual generation, without needing to specify a priori the type of perturbation or semantic dimension. To focus on label-transforming counterfactuals, we retain all ( c (cid:48) i , a (cid:48) i ) where a (cid:48) i does not match any of the gold answers a from the original NQ example. 3.3 Question Generation This component generates questions q (cid:48) that correspond to the answer-context pairs ( c (cid:48) , a (cid:48) ) . We use a T5 (Raffel et al., 2020) model fine-tuned 2021; Ross et al., 2021a) this is very difficult due to the need to model complex discourse and knowledge relations. 1672 on ( q, c, a ) triples from Natural Questions, using context passages as input with the answer marked with special tokens. We use the trained model to generate questions ( q (cid:48) 1 , q (cid:48) 2 , . . . q (cid:48) k ) for each of the the retrieved set of alternate contexts and answers, (( c (cid:48) 1 , a (cid:48) 1 ) , ( c (cid:48) 2 , a (cid:48) 2 ) , . . . ( c (cid:48) k , a (cid:48) k )) . For each ( c (cid:48) i , a (cid:48) i ) , we use beam decoding to generate 15 different questions q (cid:48) . We measure the fluency and correctness of generated questions in 4. 3.4 Filtering for Data Augmentation Noise Filtering The question generation model can be noisy, resulting in a question that cannot be answered given c (cid:48) or for which a (cid:48) is an incorrect answer. Round-trip consistency (Alberti et al., 2019; Fang et al., 2020) uses an existing QA model to answer the generated questions, ensuring that the predicted answer is consistent with the target answer provided to the question generator. We use an ensemble of six T5-based reading-comprehension ( ( q, c ) a ) models, trained on NQ using different random seeds (Appendix A), and keep any generated ( q (cid:48) , c (cid:48) , a (cid:48) ) triples where at least 5 of the 6 models agree on the answer. This discards about 5% of the generated data, although some noise still remains; see 4 for further discussion. Filtering for Minimality Unlike prior work on generating counterfactual perturbations, we do not explicitly control for the type of semantic shift or perturbation in the generated questions. Instead, we use post-hoc filtering over generated questions q (cid:48) to encourage minimality of perturbation. We define a filtering function f ( q, q (cid:48) ) that categorizes the semantic shift or perturbation in q (cid:48) with respect to q . 
One simple version of f is the word-level edit (Levenshtein) distance between q and q (cid:48) . After noise filtering, for each original ( q, c, a ) triple we select the generated ( q (cid:48) , c (cid:48) , a (cid:48) ) with the smallest non-zero word-edit distance between q and q (cid:48) such that a (cid:54) = a (cid:48) . We use this simple heuristic to create large-scale counterfactual training data for augmentation experiments (5). Over-generating potential counterfactuals based on latent dimensions identified in retrieval and using a simple filtering heuristic avoids biasing the model toward a narrow set of perturbation types (Joshi and He, 2021). 3.5 Semantic Filtering for Evaluation To better understand the types of counterfactuals generated by RGF, we can apply additional filters based on question meaning representations to cat-Question from NQ Original: who is the captain of richmond football club? Predicate: who is the captain of X? Reference Change CF1: who is the captain of richmond's vfl reserve team? Predicate: who is the captain of X? Predicate Change CF2: who wears number 9 for richmond football club? Predicate: who wears Y for X? Predicate and Reference Change CF3: who did graham negate in the grand final last year? Predicate: who did X negate in Y last year? Table 1: Categorization of generated questions based on QED decomposition. The original reference Richmond football Club\" changes in CF1 and CF3. Predicate Who is the captain\" changes in CF2 and CF3. egorize counterfactual ( q, q (cid:48) ) pairs for evaluation. Meaning representations provide a way to decompose a question into semantic units and categorize ( q, q (cid:48) ) based on which of these units are perturbed. In this work, we employ the QED formalism for explanations in question answering (Lamm et al., 2021). QED decompositions segment the question into a predicate template and a set of reference phrases. For example, the question Who is captain of richmond football club\" decomposes into one question reference richmond football club\" and the predicate Who is captain of X\" . A few example questions and their QED decompositions are illustrated in Table 1. We use these question decompositions to identify the relation between a counterfactual pair ( q, q (cid:48) ) . Concretely, we fine-tune a T5-based model on the QED dataset to perform explanation generation following the recipe of Lamm et al. (2021), and use this to identify predicates and references for the question from each ( q, c, a ) triple. We use exact match between strings to identify reference changes. As predicates can often differ slightly in phrasing ( who captained vs. who is captain ), we take a predicate match to be a prefix matching with more than 10 characters. For instance, Who is the captain of Richmond's first ever women's team?\" , Who is the captain of the Richmond Football Club\" have same predicates. We filter generated questions into three perturbation categories reference change, predicate change, or both. 4 Intrinsic Evaluation Following desiderata from Wu et al. (2021a) and Ross et al. (2021b), we evaluate our RGF data 1673 Player Specific Game outcome Who has won the women's single winbledon tennis tournament in 2018 Who won the women's singles Australian Open? Who won the women's doubles at Wimbledon 2015 How many games in Wimbledon final set tie break? 
Who won the Wimbledon women's singles title in 2016 Who won the runner's up in the women's singles at Wimbledon in 2018 Who did Serena Williams best in the Wimbledon finals 2015 GameType Misc. TournamentName Tournamentyear Locative Country what's the population of walnut grove minnesota? what's the population of walnut grove washington? what is the population of apple valley minnesota ? how many students at walnut grove secondary school ? how long has the walnut twig beetle been in california ? where is walnut grove located in minnesota ? what is the population of walnut grove bc ? TownName Population based StateName Misc Who won the men's singles at wimbledon?",
"along three measures: fluency , correctness , and directionality .",
"Fluency Fluency measures whether the generated text is grammatically correct and semantically meaningful.",
"Fluency is very high from RGF, as the generation step leverages a high-quality pretrained langauge model (T5).",
"We manually annotate a subset of 100 generated questions, and find that 96% of these are fluent.",
"Correctness Correctness measures if the generated question q (cid:48) and context, alternate answer pairs ( c (cid:48) , a (cid:48) ) are aligned, i.e. the question is answerable given context c (cid:48) and a (cid:48) is that answer.",
"We quantify correctness in the generated dataset by manually annotating a samples of 100 ( q (cid:48) , c (cid:48) , a (cid:48) ) triples (see Appendix B).",
"The proportion of noise varies from 30% before noise filtering and 25% after noise filtering using an ensemble of models (3.4).",
"Directionality/Semantic Diversity In Table 2, we show examples of semantic changes that occur in our data, including reference changes (50% of changes), predicate changes (30%), negations (1%), question expansions, disambiguations, and contractions (13%).",
"These cover many of the transformations found in prior work (Gardner et al., 2020; Ross et al., 2021b; Min et al., 2020b), but RGF is able to achieve these without the use of heuristic transformations or structured meaning representations.",
"As shown in Figure 2, the types of relations are semantically rich and cover attributes relevant to each particular instance that would be difficult to capture with a globally-specified schema.",
"Additional examples are shown in Figure 6. 1674 Exact Match (RC) Train Size NQ SQuAD TriviaQA HotpotQA BioASQ AQA AmbigQA Original NQ 90K 70.91 80.26 13.67 50.57 35.90 27.00 46.81 Ensemble 90K 71.29 80.50 13.86 50.57 36.90 27.80 46.90 Gold Agen-Qgen 90K + 90K 70.80 67.71 10.83 42.69 30.63 19.40 41.95 Rand.",
"Unlike many counterfactual generation methods, RGF natively creates fully-labeled ( q (cid:48) , c (cid:48) , a (cid:48) ) examples which can be used directly for counterfactual data augmentation (CDA).",
"We augment the original NQ training set with additional examples from RGF, shuffling all examples in training.",
"We explore two experimental settings, reading comprehension (5.2) and open-domain QA (5.3), and compare RGF-augmented models to those trained only on NQ, as well as to alternative baselines for synthetic data generation.",
"As described in Section 3.4, we use edit-distance based filtering to choose one generated ( q (cid:48) , c (cid:48) , a (cid:48) ) triple to augment for every original example, ( q, c, a ) .",
"3 Additional training details for all models and baselines are included in Appendix A. 5.1 Baselines In the abstract, our model for generating counterfactuals specifies a way of selecting contexts c (cid:48) from original questions, and answers a (cid:48) within those contexts, and a way of a generating questions q (cid:48) from them.",
"RGF uses a retrieval model to identify relevant contexts; here we experiment with two baselines that use alternate ways to select c (cid:48) .",
"We also compare to the ensemble of six reading comprehension models described in 3.4, with answers selected by majority vote.",
"Random Passage (Rand. Agen-Qgen) Here, c (cid:48) is a randomly chosen paragraph from the Wikipedia index, with no explicit relation with the original question.",
"This setting simulates generation from the original data distribution of Natural Questions.",
"To ensure that the random sampling of Wikipedia paragraphs has a similar distribution, we employ the learned passage selection model from Lewis 3 We don't see significant gains from adding more data beyond this; see Appendix C.3 et al. (2021), 4 .",
"This baseline corresponds to the model of Bartolo et al. (2021), which was applied to the SQuAD dataset (Rajpurkar et al., 2016); our version is trained on NQ and omits AdversarialQA.",
"Gold Context (Gold Agen-Qgen) Here, c (cid:48) is the passage c containing the original short answer a from the NQ training set.",
"This baseline specifically ablates the retrieval component of RGF, testing whether the use of alternate passages leads to more diversity in the resulting counterfactual questions.",
"Answer Generation for Baselines For both the above baselines for context selection, we select spans in the new passage that are likely to be answers for a potential counterfactual question.",
"We use a T5 (Raffel et al., 2020) model fine-tuned for question-independent answer selection c a on NQ, and select the top 15 candidates from beam search.",
"To avoid simply repeating the original question, we only retain answer candidates a (cid:48) which do not match the original NQ answers a for that example.",
"These alternate generated answer candidates and associated passages are then used for question generation and filtering as in RGF (3.3).",
"For the Gold Agen-Qgen case, we select based on the longest edit distance between ( q, q (cid:48) ) , which gave significantly better performance than random selection or the shortest edit distance used for RGF.",
"In the reading comprehension (RC) setting, the input consists of the question and context and the task is to identify an answer span in the context.",
"Thus, we augment training with full triples ( q (cid:48) , c (cid:48) , a (cid:48) ) consisting of the retrieved passage c (cid:48) , generated and filtered question q (cid:48) , and alternate answer a (cid:48) .",
"with input consisting of the question prepended to the context.",
"We evaluate domain generalisation of our RC models on three evaluation sets from the MRQA 2019 Challenge (Fisch et al., 2019).",
"We also measure performance on evaluation sets consisting of counterfactual or perturbed versions of RC datasets on Wikipedia, including SQuAD (Ra-jpurkar et al., 2016), AQA (adversarially-generated SQuAD questions; Bartolo et al., 2020), and human authored counterfactual examples (contrast sets; Gardner et al., 2020) from the QUOREF dataset (Dasigi et al., 2019).",
"We also evaluate on the set of disambiguated queries in AmbigQA (Min et al., 2020b), which by construction are minimal edits to queries from the original NQ.",
"Results We report exact-match scores in Table 3; F1 scores follow a similar trend.",
"We observe only limited improvements on the in-domain NQ development set, but we see significant improvements from CDA with RGF data in out-of-domain and challenge-set evaluations compared both to the original NQ model and the Gold and Random baselines.",
"RGF improves by 1-2 EM points on most challenge sets, and up to 7 EM points on the BioASQ set compared to training on NQ only, while baselines often underperform the NQ-only model on these sets.",
"Note that all three augmentation methods have similar proportion of noise (Appendix B), so CDA's benefits may be attributed to improving model's ability to learn more robust features for the task of reading comprehension.",
"Using an ensemble of RC models improves slightly on some tasks, but does not improve on OOD performance as much as RGF.",
"RGF's superior performance compared to the Gold Agen-Qgen baseline is especially interesting, since the latter also generates topically related questions.",
"We observe that RGF counterfactuals are more closely related to the original question compared to this baseline (Fig-ure 5 in Appendix C), since q (cid:48) is derived from a near-miss candidate ( c (cid:48) , a (cid:48) ) to answer the original q (S3.1).",
"In the open-domain (OD) setting, only the question is provided as input.",
"The pair ( q (cid:48) , a (cid:48) ) , consisting of generated and filtered question q (cid:48) and alternate answer a (cid:48) , is used for augmentation.",
"Compared to the RC setting where passages change as well, here the edit distance filtering of 3.4 ensures the augmentation data represents minimal perturbations.",
"Experimental Setting We use the method and implementation from Guu et al. (2020) to finetune REALM on ( q, a ) pairs from NQ.",
"End-to-end training of REALM updates both the reader model and the query-document encoders of the retriever module.",
"We evaluate domain generalization on popular open-domain benchmarks: TriviaQA (Joshi et al., 2017), SQuAD (Rajpurkar et al., 2016), Curated TREC dataset (Min et al., 2021), and disambiguated queries from AmbigQA (Min et al., 2020b).",
"Results In the open-domain setting (Table 4), we observe an improvement of 2 EM points over the original model even in-domain on Natural Questions, while also improving significantly when compared to other data augmentation techniques.",
"RGF improves over the next best baseline Random Agen-Qgen by up to 6 EM points (on TriviaQA).",
"We hypothesize that data augmentation has more benefit in this setting, as the open-domain task is more difficult than reading comprehension, and counterfactual queries may help the model learn better query and document representations to improve retrieval.",
"To better understand how CDA improves the model, we introduce a measure of local consistency (6.1) to measure model robustness, and perform a strat-ified analysis (6.2) to show the benefits of the semantic diversity available from RGF.",
"Compared to synthetic data methods such as PAQ (Lewis et al., 2021), RGF generates counterfactual examples that are paired with the original inputs and concentrated in local neighborhoods around them (Figure 2).",
"As such, we hypothesize that augmentation with this data should specifically improve local consistency, i.e. how the model behaves under small perturbations of the input.",
"Experimental Setting We explicitly measure how well a model's local behavior respects perturbations to input.",
"Specifically, if a model f : ( q, c ) a correctly answers q , how often does it also correctly answer q (cid:48) ?",
"We define pairwise consistency as accuracy over the counterfactuals ( q (cid:48) , a (cid:48) , c (cid:48) ) , conditioned on correct predictions for the original examples: C ( D ) = ED [ f ( q (cid:48) , c (cid:48) ) = a (cid:48) | f ( q, c ) = a ] 1676 Exact Match (OD) Train Size NQ TriviaQA AmbigQA SQuAD v1.0 TREC Original 90K 37.65 26.75 22.43 14.25 31.93 Gold Agen-Qgen 90K + 90K 37.86 27.02 23.65 15.01 32.94 Rand.",
"To measure consistency, we construct validation sets consisting of paired examples ( q, c, a ) , ( q (cid:48) , c (cid:48) , a (cid:48) ) : one original, and one counterfactual.",
"We use QED to categorize our data, as described in 3.5.",
"Specifically, we create two types of pairs:",
"(a) a change in reference where question predicate remains fixed, and",
"(b) a change in predicate, where the original reference(s) are preserved.",
"5 We create a clean evaluation set by first selecting RGF examples for predicate or reference change, then manually filtering the data to discard incorrect triples (4) until we have 1000 evaluation pairs of each type (see Appendix B).",
"We also construct paired versions of AQA, AmbigQA, and the QUOREF contrast set.",
"For AmbigQA, we pair two disambiguated questions and for QUOREF, we pair original and human-authored counterfactuals.",
"AQA consists of human-authored adversarial questions q (cid:48) which are not explicitly paired with original questions; we create pairs by randomly selecting an original question q and a generated question q (cid:48) from the same passage.",
"Results Training with RGF data improves consistency by 12-14 points on the QED-filtered slices of RGF data, and 5-7 points on AQA, AmbigQA and QUOREF contrast (Table 5).",
"The Gold Agen-Qgen baseline (which contains topically related queries about the same passage) also improves consistency over the original model compared to the Random Agen-Qgen baseline or to the ensemble model, though not by as much as RGF.",
"Consistency improvements on AQA, AmbigQA and QUOREF are especially noteworthy, since they suggest an improvement in robustness to local perturbations that is independent of other confounding distributional similarities between training and evaluation data.",
"QED-based decomposition of queries allows for the creation of label-changing counterfactuals along orthogonal dimensions a change of reference or predicate.",
"We investigate whether training towards one type of change induces generalization bias, a detrimental effect which has been observed in tasks such as NLI (Joshi and He, 2021).",
"have the same reference (predicate change) or same predicate (reference change), as defined in 3.5.",
"We over-generate by starting with 20 ( q (cid:48) , c (cid:48) , a (cid:48) ) for each original training example to ensure that we find at least one q (cid:48) that matches the criterion.",
"We also evaluate on paired evaluation sets from 6.1.",
"Results Results are shown for QED-filtered training in Table 5. Counterfactual perturbation of a specific kind (a predicate or a reference change) during augmentation does not hurt performance on another perturbation type compared to the baseline NQ model, which differs from the observations of Joshi and He (2021) on NLI.",
"Furthermore, similar to the observations of Min et al. (2020a), augmenting with one type of perturbation has orthogonal benefits that improve model generalization on another perturbation type: augmenting with RGF ( Pred.) leads to significant improvement on RGF ( Ref.), and vice-versa.",
"Compared to reference-change examples, augmenting with predicate-change examples leads to greater improvements in local consistency, except for on RGF ( Ref.) and on AmbigQA which contains many reference-change pairs.",
"Predicate-change examples may also be more informative to the model, as reference changes can be modeled more easily by lexical matching within common context patterns.",
"Joshi and He (2021) show CDA to be most effective in the low-resource regime.",
"To better understand the role that dataset size plays in CDA in the reading comprehension setting, we evaluate RGF in a cross-domain setting where only a small amount of training data is available.",
"Experimental Setting Since our approach depends on using an open-domain QA model and a question generation model trained on all Natural Questions data, we instead experiment with a low-resource transfer setting on the BioASQ domain, which consists of questions on the biomedical domain.",
"We use the domain-targeted retrieval model from Ma et al. (2021), where synthetic question-passage relevance pairs generated over the PubMed corpus are used to train domain-specific retrieval without any gold supervised data.",
"We fine-tune our question-generation model on (limited) in-domain data, generate RGF data for augmentation, and then use this along with (limited) in-domain data to further fine-tune an RC model, using the NQ-trained weights for initialization.",
"Results We observe significant improvements over the baseline model in the low resource setting for in-domain data (< 2000 examples), as shown in Table 6. Compared with the limited gains we see on the relatively high-resource NQ reading comprehension task, we find that on BioASQ, CDA with 1000 examples improves performance by 2% F1 and 3% exact match, performing nearly as well as a model trained on 2000 gold examples.",
"These results suggest that using counterfactual data in lieu of collecting additional training data is especially useful in the low-resource setting.",
"Retrieve-Generate-Filter (RGF) creates counterfactual examples for QA which are semantically diverse, using knowledge from the passage context and a retrieval model to capture semantic changes that would be difficult to specify a priori with a global schema.",
"The resulting examples are fully-labeled, and can be used directly for training or filtered using meaning representations for analysis.",
"We show that training with this data leads to improvements on open-domain QA, as well as on challenge sets, and leads to significant improvements in local robustness.",
"While we focus on question answering, for which retrieval components are readily available, we note that the RGF paradigm is quite general and could potentially be applied to other tasks with a suitable retrieval system."
] | [
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"method"
] |
[
"Pretraining and multitask learning are widely used to improve the speech to text translation performance.",
"In this study, we are interested in training a speech to text translation model along with an auxiliary text to text translation task.",
"We conduct a detailed analysis to understand the impact of the auxiliary task on the primary task within the multitask learning framework.",
"Our analysis confirms that multitask learning tends to generate similar decoder representations from different modalities and preserve more information from the pretrained text translation modules.",
"We observe minimal negative transfer effect between the two tasks and sharing more parameters is helpful to transfer knowledge from the text task to the speech task.",
"The analysis also reveals that the modality representation difference at the top decoder layers is still not negligible, and those layers are critical for the translation quality.",
"Inspired by these findings, we propose three methods to improve translation quality.",
"First, a parameter sharing and initialization strategy is proposed to enhance information sharing between the tasks.",
"Second, a novel attention-based regularization is proposed for the encoders and pulls the representations from different modalities closer.",
"Third, an online knowledge distillation is proposed to enhance the knowledge transfer from the text to the speech task.",
"Our experiments show that the proposed approach improves translation performance by more than 2 BLEU over a strong baseline and achieves state-of-the-art results on the MUST-C English-German, English-French and English-Spanish language pairs.",
"in some applications (Niehues et al., 2019; Salesky and Black, 2020).",
"However, the success of end-to-end methods relies on large amounts of training data, which is quite expensive to obtain and relatively small in practice.",
"Building ST systems from pretrained models with multitask learning (MTL) is widely used to overcome the limited training data issue (Weiss et al., 2017; Anastasopoulos and Chiang, 2018; Bahar et al., 2019; Indurthi et al., 2020; Wang et al., 2020b; Li et al., 2020).",
"Nevertheless, little prior work has been devoted to understanding the interactions between different tasks.",
"Standley et al. (2020) conduct an empirical study on computer vision tasks for MTL.",
"They find many as-sumptions for MTL may not be held for specific applications.",
"For example, similar tasks do not necessarily train better together.",
"In this study, we focus on training the ST model along with an auxiliary text to text machine translation (MT) task.",
"We are interested in the task interactions with different modalities and in improving the primary ST task with the help from the auxiliary MT task.",
"The model is initialized with pretrained modules from automatic speech recognition (ASR) and MT. Two types of analysis are conducted on the fine-tuned multitask learned models.",
"The first focuses on the model variation by comparing fine-tuned models with pretrained models for different tasks.",
"The second aims to measure internal representation differences due to different modalities.",
"The analysis leads to three main find-ings.",
"First, the analysis confirms that MTL tends to generate similar model representations for different input modalities and preserves more information from the pretrained MT modules.",
"Second, we do not observe significant negative transfer effect from the MT task to the corresponding ST task.",
"Sharing more parameters is helpful to transfer knowledge to the primary ST task.",
"Finally, the top layers in the ST decoder are more critical to the translation performance and they are also more sensitive to the modality difference.",
"The model representations from different modalities demonstrate larger difference for the top layers in our analysis.",
"Inspired by these findings, we propose three techniques to enhance the performance of the primary ST task.",
"First, we propose to maximize parameter sharing between the ST and MT tasks, i.e. the entire decoder and the top encoder layers.",
"Those shared parameters are initialized with the corresponding MT models.",
"Second, a cross-attentive regularization is introduced for the encoders.",
"It minimizes the L 2 distance between two reconstructed encoder output sequences and encourages the encoder outputs from different modalities to be closer to each other.",
"Finally, an online knowledge distillation learning is introduced for MTL in order to enhance knowledge transfer from the MT to the ST task.",
"Our contributions are summarized as follows: 1. A detailed analysis is conducted on the interaction between the primary ST task and the auxiliary MT task.",
"2. A parameter sharing and initialization strategy are proposed to encourage information sharing between tasks.",
"3. Cross-attentive regularization and online knowledge distillation are proposed to reduce the model representation difference between different modalities and enhance the knowledge transfer from the MT task to the ST task.",
"4. Our system achieves state of the art results on the MUST-C English-German (EN-DE), English-French (EN-FR) and English-Spanish (EN-ES) language pairs, with 2 or more BLEU gains over strong baselines.",
"Multitask learning aims to improve generalization by leveraging domain-specific information contained in the training signals of related tasks (Vandenhende et al., 2020).",
"Compared with single task, MTL has many advantages, such as the potential to improve performance by sharing complementary information or acting as a regularizer.",
"Many previous works focus on learning a good model for all tasks.",
"Chen et al. (2018) study the gradients from different tasks and conduct task dependent gradient normalization to encourage different tasks to learn at similar speed.",
"Maninis et al.",
"(2019); Liu et al. (2019a); Pfeiffer et al. (2020) introduce task-dependent components to enhance individual task performance.",
"Weiss et al. (2017) explore different multitask training strategies for ST, and they find the one-to-many strategy, in which an encoder is shared between the ST and ASR tasks, is more effective.",
"Anastasopoulos and Chiang (2018) further extend it to a triangle structure by concatenating ASR and ST models.",
"Bahar et al. (2019) compare different multitask strategies for the ST task, and they con-firm many-to-one strategy, in which MT and ST are trained together and the decoder is shared between two tasks, is effective if extra bitext data is used.",
"In this work, we carefully study the relation between co-trained tasks in the many-to-one strategy, and the analysis results guide us to propose three techniques to learn more from the auxiliary MT task and enhance the ST performance further.",
"Model analysis Chatterji et al. (2020) propose criticality analysis to measure the importance of different modules from the trained model.",
"Parameters in the selected module or layer are partially rolled back to the initial values, and the module criticality or importance is measured by the performance drop after modification.",
"Larger performance drops indicate a more critical module.",
"Inspired by their work, we extend it to the analysis on the jointly trained models with different pretrained modules and schemes.",
"Raghu et al. (2017); Morcos et al. (2018) propose to employ canonical correlation to measure the similarity between different models given the same input.",
"We extend their work to study a model with inputs from different modalities.",
"The proposed ST system is co-trained with the MT task as depicted in Figure 1. The modules in the primary ST task are connected with dark gray lines and the auxiliary MT task is illustrated with light gray lines.",
"The parameters in the blue modules are shared between the two tasks.",
"During inference with speech input, only modules related to the ST task are used.",
"The model has two encoders, a text encoder and a speech encoder, to take text and speech input respectively.",
"The decoder is shared between the two tasks.",
"To encourage knowledge sharing between the two tasks, the top encoder layers are also shared.",
"The parameters of the shared modules are initialized with a pretrained MT model.",
"A novel cross-attentive regularization is proposed to reduce the distance between encoder outputs from different input modalities.",
"We also introduce a novel online knowledge distillation method where the output from the auxiliary MT task is used to guide the ST model training.",
"The cross-attentive regularization and online knowledge distillation are illustrated as orange modules in Figure 1 and the details are presented in the following two subsections.",
"The cross-attentive regularization (CAR) is proposed to increase the similarity between the text encoder outputs and their corresponding speech encoder outputs.",
"Hence, the performance of the more difficult ST task can be improved by learning from the relatively easier MT task.",
"Encoder output sequences from different modalities can not be compared directly since they have different lengths.",
"In CAR, the two reconstructed sequences are calculated from the text output sequence via self-attention or the speech output sequence via cross attention over the text output sequence.",
"The two reconstructed sequences have the same length and the distance is simply measured as the L 2 distance between the two sequences.",
"Formally, we denote a speech to text translation training sample as a triplet o = ( X s , x t , y ) .",
"X s R d s N , x t RM , and y RK are the speech feature input, text token input and target text output respectively.",
"N , M and K are the corresponding sequence lengths.",
"Assume H s = ( h s 1 , h s 2 , , h sN ) and H t = ( h t 1 , h t 2 , , h tM ) , h sn , h tm R d h are outputs from the speech encoder and text encoder respectively, where d h is the dimension of the output states.",
"A similarity matrix S RN M is defined as the cosine distance between the tensors in the two sequences: s i,j = ( h si ) (cid:48) h tj || h si || 2 || h tj || 2 (1) where s i,j is the i th row and j th column component in S .",
"The text encoder outputs H t are reconstructed through the speech encoder outputs H s and similarity matrix S as below.",
"H t t , the reconstruction of H t from itself, can be computed similarly via self-attention.",
"CAR is defined as the L 2 distance between the two reconstruction encoder outputs: LCAR ( s ) = 1 M (cid:13)(cid:13)(cid:13) H s t sg [ H t t ] (cid:13)(cid:13)(cid:13) 2 (3) where sg [ ] is the stop-gradient operator and s are the ST model parameters.",
"By optimizing the model with CAR, the speech encoder is encouraged to learn from more accurate text encoder and generates similar encoder outputs after reconstruction.",
"CAR is inspired by the attention mechanism between the encoder and decoder where the decoder states are reconstructed through encoder output states via the attention mechanism.",
"Knowledge distillation (KD) is widely used for model compression (Hinton et al., 2015; Kim and Rush, 2016) where a smaller student network is trained to mimic the original teacher network by minimizing the loss between the student and teacher outputs.",
"The ST task is considerably more difficult than the MT task since the speech input is noisier and more ambiguous than the text input.",
"The accuracy of the MT model is usually much higher than the corresponding ST model.",
"Knowledge distillation from a well trained MT model to a ST model has been proved to be an effective way to improve the ST performance (Liu et al., 2019b; Gaido et al., 2020).",
"In this work, we extend knowledge distillation to the MTL framework where both ST and MT are fine-tuned simultaneously with shared parameters.",
"Concretely, we assume an MTL model learns from a data set D with target vocabulary size | V | .",
"The training criterion is to minimize negative log likelihood (NLL) for each example o = ( X s , x t , y ) D from the training data: LNLL ( s ) = D (cid:88) o K (cid:88) k =1 | V | (cid:88) v =1 ( y k = v ) log p ( y k = v | y <k , X s , s ) (4) where ( ) is the indicator function and p the distribution from the ST model (parameterized by s ).",
"Assume the probability distribution for y k given text input x t and MT model t is q ( y k = v | y <k , x t , t ) , the knowledge distillation loss is defined as minimizing the cross-entropy with the MT's probability distribution LKD ( s ) = D (cid:88) o K (cid:88) k =1 | V | (cid:88) v =1 q ( y k = v | y <k , x t , t ) log p ( y k = v | y <k , X s , s ) (5) The overall loss is the combination of cross-attentive regularization, knowledge distillation loss, negative log likelihood loss for both ST and MT, as follows: L ( s , t ) = LNLL ( s ) + (1 ) LKD ( s ) + LCAR ( s ) + LNLL ( t ) (6) where and are predefined hyper-parameters.",
"Experiments are conducted on three MUSTC (Gangi et al., 2019a) language pairs: EN-DE, EN-ES and EN-FR.",
"The models are developed and analyzed on the dev set and the final results are reported on the tst-COMMON set.",
"We use WMT parallel data from different years, 2013 for Spanish, 2014 for German, and 2016 for French, as extra text training corpus for MTL.",
"Case-sensitive deto-kenized BLEU is reported by SACREBLEU with default options (Post, 2018).",
"We use the T-Md configuration from (Wang et al., 2020a) in all experiments.",
"The speech encoder has 12 transformer layers while the decoder is with 6 transformer layers.",
"For the MTL model, the text encoder has 6 transformer layers.",
"The transformer layer has an input embedding size of 512 and middle layer dimension 2048.",
"We share parameters of all 6 text encoder transformer layers with the top 6 transformer layers in the speech encoder, hence both encoders use the same modules to generate the encoder outputs.",
"The Adam optimizer (Kingma and Ba, 2014) with a learning rate 0.002 is employed in the experiments.",
"Label smoothing and dropout rate are both set to 0.1.",
"We choose = 0 .",
"8 and = 0 .",
"02 in Equation 6 through grid search ( [0 . 1 , 1 . 0] for and [0 . 01 , 0 . 05] for ).",
"Input speech is represented as 80D log mel-filterbank coefficients computed every 10ms with a 25ms window.",
"Global channel mean and variance normalization is applied.",
"The SpecAugment (Park et al., 2019) data augmentation with the LB policy is applied in all experiments.",
"The input text tokens are converted into their corresponding pronunciation form as phoneme sequences (Tang et al., 2021; Renduchintala et al., 2018).",
"The grapheme to phoneme conversion is done through the g2p en python package (Lee and Kim, 2018).",
"The leading phoneme in a word is appended with an extra to mark word boundaries.",
"In total, the vocabulary size for the input phonemes is 134.",
"The target vocabulary consists of 10k unigram subword units learned by SentencePiece (Kudo and Richardson, 2018) with full character coverage of all training text data.",
"All ST or jointly trained models are initialized with pretrained ASR and MT modules.",
"The ASR model is trained on the same English speech training data from MUST-C with the T-Md configuration too.",
"The pretrained MT models are trained for each language pair with the aforementioned WMT data.",
"The MT encoder and decoder configurations are the same as the text encoder and decoder in the MTL model mentioned above.",
"The models are fine-tuned to 100 epochs using 8 V100 GPUs for approximate one day.",
"The batch size is 10,000 frames for speech to text translation samples and 10,000 tokens for parallel text samples per GPU.",
"The model parameters are updated every 4 batches.",
"Speech training samples and text input samples are used to update the model alternatively.",
"The models are trained with FAIRSEQ (Ott et al., 2019; Wang et al., 2020a).",
"The last 10 checkpoints are averaged for inference with beam size 5. 1 .",
"We extend Chatterji et al. (2020)'s work to analyze a MTL model.",
"We initialize models with different pretrained modules and fine-tune them for ST and MT tasks within the MTL framework.",
"The pretrained modules come from ASR and MT tasks.",
"Criticality analysis is conducted on the ST model after the MTL fine-tuning step.",
"The parameters in the selected modules are interpolated with corresponding parameters in the pretrained modules.",
"MUST-C EN-DE dev set is used for BLEU computation.",
"With different interpolation ratios, we obtain different BLEU scores.",
"The BLEU difference comes from two sources.",
"The first one comes from the selected module itself.",
"If the module is important and sensitive, very small perturbation could result in a nontrivial BLEU difference as (Chatterji et al., 2020).",
"Another source of difference is that if the selected module changes significantly to adapt to the ST task, rewinding the parameters back to the initial task may lead to a substantial decrease in BLEU.",
"We attempt to quantify the extent of the degradation from the second source, which can be indicative of the model variation from the pretrained task to the ST task.",
"This is accomplished by comparing the BLEU differences for the same module but using different initialization and training schemes.",
"Table 1 lists models initialized with different pretrained modules.",
"ST designates a ST model trained with the single ST task, JT corresponds to a ST model trained with the primary ST task and auxiliary MT task together.",
"JT-S-ASR and JT-S-MT are another two jointly trained models but 1 The source code will be released at https://github.com/pytorch/fairseq/tree/master/examples/speechtextjointtotext",
"with the top encoder layers shared as described in section 4. The difference between the two models is how we initialized the shared encoder layers, either from the pretrained ASR model for JT-S-ASR or from the pretrained MT model for JT-S-MT.",
"ST Figure 2 shows the analysis for the ST model.",
"The x-axis is the interpolation ratio and 1.0 means the pretrained parameters are used.",
"The y-axis is the relative change in BLEU compared with the well-trained ST model.",
"It is clear that higher layers are more critical to the performance .",
"Around 5 BLEU decrease is observed on the top encoder layer (11) and top decoder layer (5) during the criticality tests.",
"The following analysis will compare with Figure 2 and we can separate the aforementioned second source from the first one.",
"JT Figure 3 presents the analysis for the JT model.",
"The jointly trained model shows smaller degradation compared with ST for the decoder layers.",
"This indicates that training the ST and MT tasks together helps to preserve more information from the original MT decoder and partially remedies the catastrophic forgetting (Mc-Closkey and Cohen, 1989) during the finetuning phase.",
"On the other hand, after rolling parameters back to the initial ASR model, the jointly trained model shows a larger degradation for the encoder layers.",
"This means that the speech encoder in the jointly trained model has deviated far away from the speech encoder in the initial ASR task.",
"We conclude that the shared decoder is subject to more constraints since it is optimized toward both MT and ST tasks while the speech encoder has to undergo larger changes in order to align with the text encoder, although there is no parameter sharing between two encoders.",
"the top encoder layers shared are presented in Figure 4 and 5. In JT-S-MT, the top 6 shared encoder layers are initialized with the pretrained MT encoder.",
"We illustrate their BLEU difference trajectories with dotted lines in Figure 5",
"(a) so they can be easily distinguished from other layers initialized from the ASR encoder.",
"The BLEU difference for the top encoder layer is down from 20.2 to 17.6 when the parameters are replaced with the ones in the pretrained ASR encoder.",
"It is further reduced to 10.0 if the shared layers are initialized with MT encoder layers.",
"The BLEU differences in the decoder layers are mixed.",
"The performance of JT-S-ASR degrades quickly in the criticality test for the top decoder layer, while JT-S-MT performs similarly in the test as JT decoder.",
"We argue that the top layers in the fine-tuned ST encoder might be closer to the MT encoder than the ASR encoder.",
"It preserves more information from the MT task by sharing more parameters between two tasks and initializing them with pretrained MT modules .",
"The jointly trained model takes input from two modalities, i.e. text or speech, and we are interested in the model internal representation difference for paired inputs.",
"Given text target y , we extract the decoder hidden state representations for the corresponding text input x t and speech input X s .",
"The decoder representation difference solely comes from different input modalities.",
"The difference is quantified by the correlation coefficient over all samples evaluated between two input modalities: r s,t ( l, d ) = st ( l, d ) s ( l, d ) t ( l, d ) (7) where z ( l, d ) , z [ s, t ] is the standard deviations of decoder hidden states at layer l for component d in all samples, and st ( l, d ) is the corresponding covariance.",
"The layer-wise correlation coefficient is the average of all components: r s,t ( l ) = 1 D (cid:88) d r s,t ( l, d ) (8) Figure 6 depicts the correlation coefficient between speech input and text input for each decoder layer in the model JT-S-MT.",
"The x-axis is the number of training epochs and the y-axis represents the correlation coefficient for each layer.",
"There Data corpus #pars(m) DE ES FR Gangi et al. (2019b) 30 17.7 20.9 26.5 Inaguma et al. (2020) -22.9 28.0 32.7 Pino et al. (2020) 435 25.2 -34.5 ST 76 21.5 28.1 33.8 JT 76 24.1 29.0 35.1 JT Proposed 76 26.8 31.0 37.4 Table 2: BLEU on three language pairs in the MuST-C tst-COMMON datasets.",
"are two observations.",
"First, the correlation coefficients become larger and close to 1.0 as training converges.",
"Second, the higher the layer, the smaller the correlation coefficient.",
"We hypothesize that the inputs to the lower layers are dominated by the decoder text embeddings, which are the same for both modalities, and the inputs to the higher layers would contain more information from the encoder outputs, which result in the decoder internal representation differences.",
"The analysis shows a well trained MTL decoder has similar representations for paired text and speech input.",
"However, the top decoder layers still have nontrivial representation differences due to different modalities .",
"The main ST results are presented in Table 2. The first three rows are results from the literature.",
"ST and JT are models initialized as Table 1 and studied in section 5. The last row (JT Proposed) presents results from the proposed system, in which the top encoder layers and decoder are shared, and the models are optimized following Equation 6. The second column (pars(m)) lists the number of parameters used during inference.",
"From Table 2, our ST baseline is comparable to the previously reported results except (Pino et al., 2020), who use a much larger model and additional weakly supervised speech training data.",
"As expected, the vanilla joint training baseline (JT) outperforms the ST baseline with the help of extra bitext training data.",
"Finally, the proposed joint training model (JT Proposed) achieves 2.0 2.7 BLEU gains over the strong joint training baseline (JT).",
"Table 3 breaks down the performance gains into individual components/changes.",
"Sharing encoder layers improves the quality for all three language pairs EN-DE EN-ES EN-FR JT 24.1 29.0 35.1 JT-S-ASR 24.4 29.4 35.4 JT-S-MT 24.7 29.7 35.3 + CAR 25.0 30.4 36.2 + CAR + KD 26.8 31.0 37.4 Table 3: Ablation study.",
"(JT v.s. JT-S-ASR).",
"Initializing the shared encoder layers with pretrained MT modules leads to BLEU increase for two of the three evaluated translation pairs (JT-S-ASR v.s. JT-S-MT).",
"For EN-FR, the degradation is minimal (-0.1 BLEU).",
"Overall, sharing top encoder layers can increase BLEU by 0.2 0.7 (JT-S-MT v.s. JT).",
"CAR further improves the translation by another 0.3 0.9 BLEU.",
"The best results are achieved by applying the shared top encoder layers, CAR and online KD together.",
"They are about 2.9+ BLEU better than the single task based system (ST) and achieve 2+ BLEU increase on top of the strong vanilla joint training system(JT).",
"Figure 7 demonstrates the model variation for the proposed system on the MUST-C EN-DE dev set.",
"Compared with Figure 5, the decoder shows less degradation during the criticality test and it shows CAR and online KD help to preserve more information from the MT task.",
"Figure 8 shows the corresponding correlation coefficients between paired text and speech input from the top decoder Figure 8: Correlation coefficient for the top decoder layers (epoch 100).",
"layer from different model configurations.",
"It also confirms that the proposed methods, i.e., shared top encoder layers, CAR and online KD, all reduce the modality difference substantially.",
"In MLT, many works (Maninis et al., 2019; Liu et al., 2019a; Zhang et al., 2020; Pfeiffer et al., 2020) employ task-dependent components to alleviate the negative transfer effect.",
"In Table 4, we compare the JT-S-MT model with two variants using different task-dependent components.",
"The first one (JT-S-MT + Adapter) (Bapna et al., 2019) adds an extra adapter module on the top of the speech encoder.",
"Hence, the speech encoder outputs, which are generated from shared encoder layers, are further processed to reduce the difference between speech input and text input.",
"The adapter module consists of a linear layer and layer normalization layer.",
"The second variant (JT-S-MT + Dedicated Attention) (Blackwood et al., 2018) introduces dedicated decoder modules for different tasks.",
"Attention layers between encoder and decoder, and the layer normalization modules are not shared between the ST and MT tasks.",
"It gives the decoder more flexibility to handle information from different modalities.",
"The results show the extra adapter layer doesn't bring gain while the task dependent attention module actually makes the performance worse.",
"It indicates that the negative transfer effect is not significant in this study and adding extra task-dependent components might not be necessary.",
"As shown in Table 2, training ST models with an auxiliary MT task improves the translation quality substantially.",
"It may be interesting to examine the impact on the auxiliary task itself.",
"We evaluate the MT model jointly trained with the ST task.",
"Results are shown in Table 5. ST (JT Proposed) in the first row corresponds to the best results obtained for the ST task.",
"The detailed experimental setup is described in Appendix A. For reference, we also EN-DE EN-ES EN-FR ST (JT Proposed) 26.8 31.0 37.4 MT (Gangi et al., 2019a) 28.1 34.2 42.2 MT 25.4 27.7 33.5 MT (Tuned) 29.6 34.3 41.4 MT (JT) 28.9 33.9 41.6 MT (JT Proposed) 30.5 34.7 42.3 Table 5: Comparison between ST and MT. include the MT evaluation results from MUSTC (Gangi et al., 2019a) in the second row.",
"All MT models (in the last 4 rows) take phoneme sequences as input instead of SentencePiece.",
"MT (row 3) shows the results from pretrained MT models on WMT.",
"In the MT (Tuned) row, the MT models pretrained on WMT are fine-tuned on the MUST-C datasets.",
"The large improvements clearly show a domain mismatch between WMT and MUST-C.",
"The MT models trained with WMT data are improved after fine-tuning, and they are comparable with the ones reported in (Gangi et al., 2019a), though the input token is in pronunciation form, which is more ambiguous than the corresponding SentencePiece unit.",
"MT (JT) and MT (JT Proposed) are results from the co-trained MT models in JT and JT Proposed respectively.",
"After fine-tuning using both MuST-C (speech and text) and WMT (text only) training data, the auxiliary MT models perform better than the corresponding ST models.",
"The proposed techniques further improve the co-trained MT models by 0.7 1.6 BLEU.",
"While this is a surprising result, we note that the dedicated MT models may be improved with better hyperparameter tuning.",
"In conclusion, the results show the proposed methods are effective to unify two tasks into one model with minimal negative transfer effect.",
"In this study, we focus on understanding the interactions between the ST and MT tasks under the MTL framework, and on boosting the performance of the primary ST model with the auxiliary MT task.",
"Two types of analysis on model variation and modality variation, are conducted on the MTL models.",
"The analysis demonstrates MTL helps to preserve information from the MT task and generates similar model representations for different modalities.",
"We observe a minimal negative transfer effect between the two tasks.",
"Sharing more parameters can further boost the information transfer from the MT task to the ST model.",
"The analysis also reveals that the model representation difference due to modality difference is nontrivial, especially for the top decoder layers, which are critical for the translation performance.",
"Inspired by the findings, we propose three techniques to increase knowledge transfer from the MT task to the ST task.",
"These techniques include parameter sharing and initialization strategy to improve the information sharing between tasks, CAR and online KD to encourage the ST system to learn more from the auxiliary MT task and then generate similar model representations from different modalities.",
"Our results show that the proposed methods improve translation performance and achieve state-ofthe-art results on three MUST-C language pairs."
] | [
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"objective"
] |
[
"Current research on spoken language translation (SLT) has to confront with the scarcity of sizeable and publicly available training corpora.",
"This problem hinders the adoption of neural end-to-end approaches, which represent the state of the art in the two parent tasks of SLT: automatic speech recognition and machine translation.",
"To fill this gap, we created MuST-C, a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 8 languages.",
"For each target language, MuST-C comprises at least 385 hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.",
"Together with a description of the corpus creation methodology (scalable to add new data and cover new languages), we provide an empirical verification of its quality and SLT results computed with strong baseline system on each language direction.",
"Besides the increased computing power, the recent surge of neural end-to-end approaches to natural language processing tasks has been stoked by the increased availability of data.",
"For instance, when supported by sizeable training corpora, the robustness and the strong generalization capabilities of neural networks led to their dominance over previous paradigms both in automatic speech recognition (ASR (Chiu et al., 2018)) and machine translation (MT (Bojar et al., 2018)).",
"Compared to its two parent research areas, spoken language translation (SLT) has not shown such a steady progress yet.",
"Despite recent claims by big industry players about the effectiveness of end-to-end learning (Weiss et al., 2017; Jia et al., 2018), its adoption does not yet represent the mainstream solution to the SLT task.",
"One of the main obstacles Corpus Languages Hours Niehues et al. (2018) En De 273 Kocabiyikoglu et al. (2018) En Fr 236 Tohyama et al. (2005) En Jp 182 Paulik and Waibel (2009) En Es 111 Es En 105 Post et al. (2013) En Es 38 Stuker et al. (2012) De En 37 Shimizu et al. (2014) En Jp 22 Federmann and Lewis (2017) En Jp/Zh 22 Bendazzoli and Sandrelli (2005) En It/Es 18 It Es Berard et al. (2016) Fr En 17 Federmann and Lewis (2016) En Fr/De 8 Woldeyohannis et al. (2017) Am En 7 Godard et al. (2017) Mboshi Fr 4 Table 1: Publicly available SLT corpora.",
"to a stable dominance of the end-to-end paradigm also in this area is the scarcity of training corpora.",
"While cascade ASR+MT solutions can exploit the wealth of task-specific data available for each of the two tasks, 1 the situation for end-to-end model training is much less favourable.",
"As shown in Table 1, few publicly available corpora exist, their language coverage is rather limited and, most importantly, their size is often too small (less than 100 hours of translated audio) for training data-hungry neural models.",
"2 To circumvent the problem, neural SLT approaches currently rely on:",
"1 In resource-rich conditions, ASR and MT training often builds on thousands of hours of transcribed speech and tens of millions of parallel sentences, respectively.",
"2 Besides the corpora reported in Table 1, several smaller ( < 4 hours) freely-available datasets have been created ( e.g. the IWSLT evaluation campaign development and test sets from 2010 to 2017 and the Griko-Italian corpus by Boito et al. (2018)).",
"(Weiss et al., 2017; Anastasopoulos and Chiang, 2018; Berard et al., 2018),",
"iii) encoder/decoder pre-training (Bansal et al., 2018; Berard et al., 2018),",
"iv) synthesized speech data (Berard et al., 2016), or",
"v) machine-translated target text data (Berard et al., 2018).",
"Though effective, solutions",
"ii) and",
"iii) assume the availability of ASR and MT data, which is not always guaranteed (especially in low-resource language settings).",
"Solutions",
"iv) and",
"v) , instead, rely on training material derived from sub-optimal automatic data creation/augmentation procedures.",
"This situation calls for initiatives towards the creation of large, high-quality multilingual corpora suitable to explore end-to-end SLT in more favorable conditions similar to condition",
"i) .",
"Along this direction, our contributions are: A large ( 400 hours of speech per language) multilingual corpus for SLT from English into 8 languages (German, Spanish, French, Italian, Dutch, Portuguese, Romanian and Russian); An empirical verification of its quality; ASR, MT and SLT results computed with strong baseline systems on each language direction.",
"MuST-C is released under a Creative Commons license, Attribution Non Commercial No Derivatives (CC BY NC ND 4.0 International), and is freely downloadable at mustc.fbk.eu 2 Corpus Creation Methodology Must-C was created pursuing high quality as well as large size, speaker variety (male/female, native/non-native) and coverage in terms of topics and languages.",
"To achieve these objectives, similar to (Niehues et al., 2018), we started from English TED Talks, in which a variety of speakers discuss topics spanning from business to science and entertainment.",
"Most importantly, the fact that TED talks are often manually transcribed and translated sets ideal conditions for creating an SLT corpus from high-quality text material.",
"Although the initial data are similar to those used to build the IWSLT18 corpus, our methodology is different.",
"Inspired by Kocabiyikoglu et al. (2018), it exploits automatic alignment procedures, first at the text level (between transcriptions and translations) and then with the corresponding audio segments.",
"More in detail, for each target language L i , the (EnglishL i ) section of MuST-C is created as follows.",
"First, for all the English talks available from the TED website, 3 we download the videos and the HTML files containing the manual transcriptions and their translation into L i .",
"4 Then, the plain text transcription and the translation of each talk are split at the sentence level based on strong punctuation marks and aligned using the Gargantua sentence alignment tool (Braune and Fraser, 2010).",
"This step produces a bilingual text corpus aligned at the sentence level.",
"In the third step, the English side of this bilingual corpus is aligned to the corresponding audio track extracted from the video.",
"This is done using Gentle, 5 an off-the-shelf English forced-aligner built on the Kaldi ASR toolkit (Povey et al., 2011).",
"Next, the audio-text alignments are processed to create a YAML file containing time information ( i.e. start and duration) for each sentence.",
"In this processing step, two filters are applied to weed out potentially noisy segments, or entire talks, based on the number of words that were not aligned by Gentle.",
"First, entire talks are discarded if the proportion of unrecognized words is equal or greater than 15% of the total.",
"This threshold was determined after a manual analysis of 73 talks (those with the highest percentage of unrecognized words).",
"The analysis showed that these cases are representative of different types of noise like:",
"i) non-English speech,",
"ii) long silences,",
"iii) music, non-transcribed songs and videos played during the talk, and",
"iv) wrong transcriptions ( e.g. captions from other talks in the material downloaded from the TED website).",
"The second rule applies to the single sentences of the talks that passed the first filter, and removes those in which none of the words was aligned by Gentle.",
"6 In the last step, the log Mel 40-dimensional filter-bank features commonly used as input representation for ASR (Graves et al., 2013) and SLT (Weiss et al., 2017) are extracted from the 3 www.ted.com dump of April 2018.",
"4 All talks have manual captions, which were also translated into many languages by volunteers.",
"The language coverage of the translations depends on several factors like the age of the talk (the old ones often have more translations), the popularity of its topic and the availability of volunteer translators for a given language.",
"5 github.com/lowerquality/gentle 6 The effectiveness of this filtering criterion was manually verified on random samples.",
"More aggressive solutions will be explored for future releases of the corpus.",
"Table 2 provides basic statistics for the 8 sections of the MuST-C corpus.",
"Comparing the 4 th column with the numbers reported in Table 1, it is worth noting that, in terms of hours of tran-scribed/translated speech, each section is larger than any existing publicly available SLT resource.",
"In this section we present two sets of experiments, which are respectively aimed to:",
"i) empirically assess the quality of the MuST-C corpus (Section 3.3) and",
"ii) compute baseline ASR, MT, and SLT results for future comparisons (Section 3.4).",
"In these experiments, the audio-transcription alignments of MuST-C are used to train and evaluate ASR models, transcription-translation alignments are used for the MT models, and audio-translation alignments are used for the SLT models.",
"ASR and SLT.",
"For our experiments in ASR and SLT we use the same neural architecture.",
"This setting allows us to use the encoder of the ASR models to initialize the weights of the SLT encoders and achieve a faster convergence (Bansal et al., 2018).",
"Our SLT architecture is a variant of the system proposed by Berard et al. (2018), which we re-implemented in the fairseq toolkit (Gehring et al., 2017).",
"The system relies on an attentional encoder-decoder model that takes in input sequences of audio features and outputs the target sequence at the character level.",
"The encoder processes the input with two consecutive fully-connected layers to expand the size of the representation, followed by two 2D strided convolu-7 github.com/neulab/xnmt tional layers that reduce the sequence length.",
"The output of the convolutions is then processed by three stacked LSTMs (Hochreiter and Schmidhu-ber, 1997).",
"The decoder consists of a two-layered deep transition (Pascanu et al., 2014) LSTM with an attention network based on the general soft attention score (Luong et al., 2015).",
"The final output of the decoder is a function of the concatenation of the LSTM output, the context vector and the previous-character embedding.",
"MT. For the MT experiments we use the open source version of ModernMT.",
"8 The system is based on the Transformer (Vaswani et al., 2017) architecture, which represents the state of the art in NMT (Bojar et al., 2018).",
"The encoder consists of a stack of 6 layers, each containing a sequence of two sub-layers, a self-attention network based on multi-head attention, and a position-wise feed-forward layer.",
"The decoder layers have an additional sub-layer: between the self attention and the position-wise feed-forward layer they have an encoder-decoder multi-head attention.",
"All the sublayers in both the encoder and decoder are preceded by layer normalization and are followed by residual connections.",
"In our experiments, texts are tokenized and punctuation is normalized.",
"Furthermore, the English texts are lowercased, while the target language texts are split into characters still preserving the word boundaries.",
"For MT, we segment the English words with the BPE algorithm (Sennrich et al., 2015) using a maximum of 30 K merge operations.",
"The output generation of all models is performed using beam search with a beam size of 5 .",
"ASR performance is measured with word error rate (WER) computed on lower-cased, tokenized texts without punctuation.",
"MT and SLT results are computed with BLEU (Papineni et al., 2002).",
"As observed in Section 2, each section of MuST-C is larger than any other existing publicly available SLT corpus.",
"The usefulness of a resource, however, is not only a matter of size but also of quality (in this case, the quality of the audio transcription translation alignments).",
"For an empirical verification of this aspect, we experimented with two comparable datasets.",
"One is 8 www.modernmt.eu the TED-derived English-German IWSLT18 corpus (Niehues et al., 2018), which is built following a pipeline that performs segment extraction and alignment based on time information ( i.e. start and end position of each segment in the SubRip Text (SRT) files) instead of text-level alignments.",
"The other is the English-German subset of MuST-C derived from the same TED Talks used to build the IWSLT18 corpus.",
"On one side (MuST-C), the number of segments, their length, and the overall corpus quality depend on text-level alignments.",
"On the other side (IWSLT18), they depend on matching time stamps.",
"This strategy, however, has some drawbacks.",
"First, as pointed out by (Niehues et al., 2018; Liu et al., 2018; Di Gangi et al., 2018), the use of time information brings some noise in the corpus.",
"Second, it often results in utterance-level alignment (based on speakers' pauses in the original audio).",
"Compared to sentence-level alignment, this level of granularity can be sub-optimal during model training ( e.g. for MT and SLT, learning from complete sentences is easier than learning from phrases).",
"Finally, time information about the recorded speech is not always available: bypassing this need would make the method replicable on other data (not only TED-like).",
"Though initialized with the same set of 1 , 619 talks, the two pipelines produce different corpora.",
"As shown in Table 3, our approach filters out 58 entire talks ( 3.6% of the total) but the final number of segments, their corresponding audio duration and their average length (in words) are larger.",
"Corpus #Talk #Sent Hours src w tgt w IWSLT18 1,619 176K 280 2.7M 2.5M MuST-C 1,561 179K 313 3.3M 3.1M Table 3: Statistics of the English-German corpora created by applying the IWSLT18 and MuST-C pipelines to the same initial set of 1 , 619 TED Talks.",
"Each corpus was divided into training, development and test.",
"Development and test contain segments from randomly selected common talks ( i.e. those preserved by the MuST-C pipeline).",
"Their size is respectively 2 .",
"3 K (from 28 talks) and 2 .",
"1 K segments (from 26 talks).",
"The test portions were concatenated to create a balanced test set ( 4 . 2 K segments) containing half of the instances from the IWSLT18 corpus and half from MuST-C.",
"The remaining material was used to separately train ASR, MT and SLT models on homogeneous data from either of the two corpora ( i.e. three systems Training set ASR ( ) MT ( ) SLT ( ) IWSLT18 42.15 24.90 8.94 MuST-C 32.05 25.46 12.25 Table 4: Performance of ASR, MT and SLT systems trained with En-De IWSLT18 and MuST-C data.",
"per corpus).",
"All the systems are evaluated on the common test set.",
"Table 4 shows that the models trained on MuST-C data achieve better results on the balanced test set in all the three tasks.",
"In particular:",
"i) a reduction of 10 .",
"1 WER points in ASR indicates a higher quality of audio transcription alignments,",
"ii) a BLEU increase of 0 .",
"56 points in MT indicates a similar quality for transcription translation alignments, and",
"iii) a BLEU increase of 3 .",
"31 points in SLT indicates a higher quality of audio translation alignments.",
"We consider these results as evidence of the reliability of our corpus creation methodology.",
"Being the same for all the language pairs, we expect this procedure to end up in comparable quality for all the 8 sections of MuST-C.",
"We finally present baseline results computed, for all the three tasks, on each section of MuST-C.",
"Also for these experiments, development and test data are created with segments from talks that are common to all the languages.",
"Their size is respectively 1 .",
"4 K (from 11 talks) and 2 .",
"5 K segments (from 27 talks).",
"The remaining data (of variable size depending on the language pairs) are used for training.",
"For the sake of replicability, these splits are preserved in the released version of MuST-C.",
"The results in Table 5 lead to the following observations.",
"First, though not directly comparable since they are computed on different test sets, English-German results are in line (actually higher, since they are produced by models built on larger training data) with those presented in Section 3.3.",
"This indicates that the level of quality observed in the previous experiments with a subset of the training data is preserved by the whole material released for this language pair.",
"Second, looking at the other language pairs, ASR, MT and SLT results are comparable with the English-German scores.",
"Besides normal fluctuations in the optimization of the neural models, performance differences are coherent with:",
"i) the relative difficulty of each target language ( e.g. Russian is more dif-ficult due to high inflection) and",
"ii) the variable quantity of training data available ( e.g. French has the largest training set, see Table 2).",
"Overall, these explainable differences suggest that our corpus creation methodology yields homogeneous quality for all the languages covered by MuST-C.",
"We presented MuST-C, a Mu ltilingual S peech T ranslation C orpus built to address the need of resources for training data-hungry neural SLT models.",
"To the best of our knowledge, to date MuST-C is the largest publicly available corpus of this kind.",
"In its current version, it comprises the English transcription and the translations into 8 target languages of at least 385 hours of speech (up to 504 ) per language.",
"Thanks to a scalable corpus creation procedure initialized with constantly expanding TED talks data, future extensions will increase the coverage of the already present target languages and introduce new ones.",
"MuST-C is released under a Creative Commons license, Attribution Non Commercial No Derivatives (CC BY NC ND 4.0 International), and is freely downloadable at mustc.fbk.eu Acknowledgments The authors gratefully acknowledge NVIDIA Corporation for the donation of the Tesla K80 and GeForce GTX 1080 Ti GPUs used for this research."
] | [
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"When answering natural language questions over knowledge bases (KBs), different question components and KB aspects play different roles.",
"However, most existing embedding-based methods for knowledge base question answering (KBQA) ignore the subtle interrelationships between the question and the KB (e.g., entity types, relation paths and context).",
"In this work, we propose to directly model the two-way flow of interactions between the questions and the KB via a novel Bidirectional Attentive Memory Network, called BAMnet.",
"Requiring no external resources and only very few hand-crafted features, on the WebQuestions benchmark, our method significantly outperforms existing information-retrieval based methods, and remains competitive with (hand-crafted) semantic parsing based methods.",
"Also, since we use attention mechanisms, our method offers better interpretability compared to other baselines.",
"With the rapid growth in large-scale knowledge bases (KBs) such as DBPedia (Auer et al., 2007) and FreeBase (Google, 2018), knowledge base question answering (KBQA) has drawn increasing attention over the past few years.",
"Given questions in natural language (NL), the goal of KBQA is to automatically find answers from the underlying KB , which provides a more natural and intuitive way to access the vast underlying knowledge resources.",
"One of the most prominent challenges of KBQA is the lexical gap.",
"For instance, the same question can be expressed in various ways in NL while a KB usually has a canonical lexicon.",
"It is therefore nontrivial to map an NL question to a structured KB.",
"The approaches proposed to tackle the KBQA task can be roughly categorized into two groups: semantic parsing (SP) and information retrieval (IR) approaches.",
"SP-based approaches address the problem by constructing a semantic parser that converts NL questions into intermediate logic forms, which can be executed against a KB.",
"Traditional semantic parsers (Wong and Mooney, 2007) require annotated logical forms as supervision, and are limited to narrow domains with a small number of logical predicates.",
"Recent efforts overcome these limitations via the construction of hand-crafted rules or features (Abujabal et al., 2017; Hu et al., 2018) schema matching (Cai and Yates, 2013), and using weak supervision from external resources (Krish-namurthy and Mitchell, 2012).",
"Unlike SP-based approaches that usually assume a pre-defined set of lexical triggers or rules, which limit their domains and scalability, IR-based approaches directly retrieve answers from the KB in light of the information conveyed in the questions.",
"These IR-based approaches usually do not require hand-made rules and can therefore scale better to large and complex KBs.",
"Recently, deep neural networks have been shown to produce strong results on many NLP tasks.",
"In the field of KBQA, under the umbrella of IR-based approaches, many embedding-based methods (Bordes et al., 2014b; Hao et al., 2017) have been proposed and have shown promising results.",
"These methods adopt various ways to encode questions and KB subgraphs into a common embedding space and directly match them in that space, and can be typically trained in an end-to-end manner.",
"Compared to existing embedding-based methods that encode questions and KB subgraphs independently, we introduce a novel B idirectional A ttentive M emory net work, called BAMnet that captures the mutual interactions between questions and the underlying KB, which is stored in a content-addressable memory.",
"We assume that the world knowledge (i.e., the KB) is helpful for better understanding the questions.",
"Similarly, the questions themselves can help us focus on important KB aspects.",
"To this end, we design a two-layered bidirectional attention network .",
"The primary attention network is intended to focus on important parts of a question in light of the KB and important KB aspects in light of the question.",
"Built on top of that, the secondary attention network is intended to enhance the question and KB representations by further exploiting the two-way attention.",
"Through this idea of hierarchical two-way attention , we are able to distill the information that is the most relevant to answering the questions on both sides of the question and KB.",
"We highlight the contributions of this paper as follows:",
"1) we propose a novel bidirectional attentive memory network for the task of KBQA which is intended to directly model the two-way interactions between questions and the KB;",
"2) by design, our method offers good interpretability thanks to the attention mechanisms;",
"3) on the WebQuestions benchmark, our method significantly outperforms previous information-retrieval based methods while remaining competitive with (hand-crafted) semantic parsing based methods.",
"Two broad classes of SP-based and IR-based approaches have been proposed for KBQA.",
"The former attempts to convert NL questions to logic forms.",
"Recent work focused on approaches based on weak supervision from either external resources (Krishnamurthy and Mitchell, 2012; Berant et al., 2013; Yao and Van Durme, 2014; Hu et al., 2018; Yih et al., 2015; Yavuz et al., 2016), schema matching (Cai and Yates, 2013), or using hand-crafted rules and features (Unger et al., 2012; Berant et al., 2013; Berant and Liang, 2015; Reddy et al., 2016; Bao et al., 2016; Abujabal et al., 2017; Hu et al., 2018; Bast and Haussmann, 2015; Yih et al., 2015).",
"A thread of research has been explored to generate semantic query graphs from NL questions such as using coarse alignment between phrases and predicates (Berant et al., 2013), searching partial logical forms via an agenda-based strategy (Berant and Liang, 2015), pushing down the disambiguation step into the query evaluation stage (Hu et al., 2018), or exploiting rich syntactic information in NL questions (Xu et al., 2018a,b).",
"Notably, another thread of SP-based approaches tries to exploit IR-based techniques (Yao and Van Durme, 2014; Bast and Haussmann, 2015; Yang et al., 2014; Yih et al., 2015; Bao et al., 2016; Yavuz et al., 2016; Liang et al., 2016) by computing the similarity of two sequences as features, leveraging a neural network-based answer type prediction model, or training an end-to-end neural symbolic machine via REINFORCE (Williams, 1992).",
"However, most SP-based approaches more or less rely on hand-crafted rules or features, which limits their scalability and transferability.",
"The other line of work (the IR-based) has focused on mapping answers and questions into the same embedding space, where one could query any KB independent of its schema without requiring any grammar or lexicon.",
"Bordes et al. (2014b) were the first to apply an embedding-based approach for KBQA.",
"Later, Bordes et al. (2014a) proposed the idea of subgraph embedding, which encodes more information (e.g., answer path and context) about the candidate answer.",
"In follow-up work (Bordes et al., 2015; Jain, 2016), memory networks (Weston et al., 2014) were used to store candidates, and could be accessed iteratively to mimic multi-hop reasoning.",
"Unlike the above methods that mainly use a bag-of-words (BOW) representation to encode questions and KB resources, (Dong et al., 2015; Hao et al., 2017) apply more advanced network modules (e.g., CNNs and LSTMs) to encode questions.",
"Hybrid methods have also been proposed (Feng et al., 2016; Xu et al., 2016; Das et al., 2017), which achieve improved results by leveraging additional knowledge sources such as free text.",
"While most embedding-based approaches encode questions and answers independently, (Hao et al., 2017) proposed a cross-attention mechanism to encode questions according to various candidate answer aspects.",
"Differently, in this work, our method goes one step further by modeling the bidirectional interactions between questions and a KB.",
"The idea of bidirectional attention proposed in this work is similar to those applied in machine reading comprehension (Wang and Jiang, 2016; Seo et al., 2016; Xiong et al., 2016).",
"However, these previous works focus on capturing the interactions between two bodies of text, whereas in this work we focus on modeling the interactions between one body of text and a KB.",
"Given an NL question, the goal is to fetch answers from the underlying KB.",
"Our proposed BAMnet model consists of four components, namely the input module, the memory module, the reasoning module and the answer module, as shown in Fig. 1 (the overall architecture of the BAMnet model).",
"An input NL question $Q = \{q_i\}_{i=1}^{|Q|}$ is represented as a sequence of word embeddings ($\mathbf{q}_i$) by applying a word embedding layer.",
"We then use a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) to encode the question as $\mathbf{H}^Q \in \mathbb{R}^{d \times |Q|}$, the sequence of hidden states (i.e., the concatenation of forward and backward hidden states) generated by the BiLSTM.",
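"As a concrete illustration of this input module, the following is a minimal PyTorch sketch of the embedding-plus-BiLSTM question encoder; the vocabulary size and dimensions are illustrative placeholders, not values prescribed here.",
```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Sketch of the input module: word embedding + BiLSTM."""
    def __init__(self, vocab_size, d=128, d_word=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_word)
        # Forward and backward states are concatenated, so each
        # direction uses d // 2 hidden units to produce d-dim states.
        self.bilstm = nn.LSTM(d_word, d // 2, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):      # (batch, |Q|)
        q = self.embed(token_ids)      # (batch, |Q|, d_word)
        H_Q, _ = self.bilstm(q)        # (batch, |Q|, d)
        return H_Q
```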
"Candidate generation Even though all the entities from the KB could in principle be candidate answers, this is computationally expensive and unnecessary in practice.",
"We only consider those entities which are close to the main topic entity of a question.",
"An answer is the text description (e.g., a name) of an entity node.",
"For example, Ohio is the topic entity of the question Who was the secretary of state of Ohio in 2011? (see Fig. 2).",
"After getting the topic entity, we collect all the entities connected to it within $h$ hops as candidate answers, which we denote as $\{A_i\}_{i=1}^{|A|}$.",
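"A minimal sketch of this candidate generation step, assuming a simplified adjacency-list view of the KB; the `kb` mapping below is a hypothetical stand-in, not Freebase's actual interface.",
```python
from collections import deque

def collect_candidates(kb, topic_entity, h=2):
    """Breadth-first collection of all entities within h hops of the
    topic entity. `kb` maps an entity id to a list of
    (relation, neighbor_entity) pairs."""
    seen = {topic_entity}
    frontier = deque([(topic_entity, 0)])
    candidates = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == h:
            continue
        for _relation, neighbor in kb.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                candidates.append(neighbor)
                frontier.append((neighbor, depth + 1))
    return candidates
```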
"KB representation For each candidate answer from the KB, we encode three types of information: answer type, path and context.",
"Answer type Entity type information is an important clue in ranking answers.",
"For example, if a question uses the interrogative word where , then candidate answers with types relevant to the concept of location are more likely to be correct.",
"We use a BiLSTM to encode its text description to get a $d$-dimensional vector $\mathbf{H}^{t_1}_i$ (i.e., the concatenation of the last forward and backward hidden states); Figure 2 shows a working example from Freebase.",
"Answer path We define an answer path as a sequence of relations from a candidate answer to a topic entity.",
"For example, for the Ohio question (see Fig. 2), the answer path of Jon A. Husted can be either represented as a sequence of relation ids [ office holder, governing officials ] or the text description [ office, holder, governing, officials ] .",
"We thus encode an answer path as $\mathbf{H}^{p_1}_i$ via a BiLSTM, and as $\mathbf{H}^{p_2}_i$ by computing the average of its relation embeddings via a relation embedding layer.",
"Answer context The answer context is defined as the surrounding entities (e.g., sibling nodes) of a candidate which can help answer questions with constraints.",
"For example, in Fig. 2, the answer context of Jon A. Husted includes the government position title secretary of state and starting date 2011-01-09 .",
"However, for simple questions without constraints, the answer context is unnecessary and can potentially incorporate noise.",
"We tackle this issue with two strategies:",
"1) we use a novel importance module (explained later) to focus on important answer aspects, and",
"2) we only consider those context nodes that have overlap with the question.",
"Specifically, for each context node (i.e., a sequence of words) of a candidate, we first compute the longest common subsequence between it and the question, and then encode the node via a BiLSTM only if the overlap contains a non-stopword substring.",
"Finally, the answer context of a candidate answer will be encoded as the average of all context node representations, which we denote as $\mathbf{H}^{c}_i$.",
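"The context-filtering step described above can be sketched as follows; the word-level LCS is standard dynamic programming, and the stopword list shown is only an illustrative subset.",
```python
def lcs_words(a, b):
    """Word-level longest common subsequence via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i + 1][j], dp[i][j + 1], key=len)
    return dp[m][n]

STOPWORDS = {"the", "of", "in", "a", "to"}   # illustrative subset only

def keep_context_node(context_words, question_words):
    """Encode a context node only if its overlap with the question
    contains at least one non-stopword (our reading of the filter)."""
    overlap = lcs_words(context_words, question_words)
    return any(w not in STOPWORDS for w in overlap)
```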
"Key-value memory module In our model, we use a key-value memory network (Miller et al., 2016) to store candidate answers.",
"Unlike a basic memory network (Weston et al., 2014), its addressing stage is based on the key memory while the reading stage uses the value memory, which gives greater flexibility to encode prior knowledge via functionality separation.",
"Thus, after encoding the answer type, path and context, we apply linear projections on them as follows: $\mathbf{M}^{k_t}_i = f_{kt}(\mathbf{H}^{t_1}_i)$, $\mathbf{M}^{v_t}_i = f_{vt}(\mathbf{H}^{t_1}_i)$, $\mathbf{M}^{k_p}_i = f_{kp}([\mathbf{H}^{p_1}_i; \mathbf{H}^{p_2}_i])$, $\mathbf{M}^{v_p}_i = f_{vp}([\mathbf{H}^{p_1}_i; \mathbf{H}^{p_2}_i])$, $\mathbf{M}^{k_c}_i = f_{kc}(\mathbf{H}^{c}_i)$, $\mathbf{M}^{v_c}_i = f_{vc}(\mathbf{H}^{c}_i)$ (1), where $\mathbf{M}^{k_t}_i$ and $\mathbf{M}^{v_t}_i$ are $d$-dimensional key and value representations of answer type $A^t_i$, respectively.",
"Similarly, we have key and value representations for answer path and answer context.",
"We denote $\mathbf{M}$ as a key-value memory whose row $\mathbf{M}_i = \{\mathbf{M}^k_i, \mathbf{M}^v_i\}$ (both in $\mathbb{R}^{d \times 3}$), where $\mathbf{M}^k_i = [\mathbf{M}^{k_t}_i; \mathbf{M}^{k_p}_i; \mathbf{M}^{k_c}_i]$ comprises the keys, and $\mathbf{M}^v_i = [\mathbf{M}^{v_t}_i; \mathbf{M}^{v_p}_i; \mathbf{M}^{v_c}_i]$ comprises the values.",
"Here $[\,,\,]$ and $[\,;\,]$ denote row-wise and column-wise concatenations, respectively.",
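"A sketch of the projections in Equation (1) that build the key-value memory; module and argument names are our own, and the dimensions follow the description above.",
```python
import torch
import torch.nn as nn

class KeyValueMemory(nn.Module):
    """Sketch of Equation (1): separate linear projections turn each
    encoded answer aspect into key and value representations."""
    def __init__(self, d=128):
        super().__init__()
        self.f_kt, self.f_vt = nn.Linear(d, d), nn.Linear(d, d)
        # The path representation concatenates H_p1 (d) and H_p2 (d).
        self.f_kp, self.f_vp = nn.Linear(2 * d, d), nn.Linear(2 * d, d)
        self.f_kc, self.f_vc = nn.Linear(d, d), nn.Linear(d, d)

    def forward(self, H_t1, H_p1, H_p2, H_c):     # all (|A|, d)
        H_p = torch.cat([H_p1, H_p2], dim=-1)     # (|A|, 2d)
        keys = torch.stack([self.f_kt(H_t1), self.f_kp(H_p),
                            self.f_kc(H_c)], dim=1)    # (|A|, 3, d)
        values = torch.stack([self.f_vt(H_t1), self.f_vp(H_p),
                              self.f_vc(H_c)], dim=1)  # (|A|, 3, d)
        return keys, values
```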
"The reasoning module consists of a generalization module, and our novel two-layered bidirectional attention network which aims at capturing the two-way interactions between questions and the KB.",
"The primary attention network contains the KB-aware attention module which focuses on the important parts of a question in light of the KB, and the importance module which focuses on the important KB aspects in light of the question.",
"The secondary attention network (the enhancing module in Fig. 1) is intended to enhance the question and KB vectors by further exploiting the two-way attention.",
"KB-aware attention module Not all words in a question are created equal.",
"We use a KB-aware attention mechanism to focus on important components of a question, as shown in Fig. 3.",
"Specifically, we first apply self-attention (SelfAtt) over all question word vectors $\mathbf{H}^Q$ to get a $d$-dimensional question vector $\mathbf{q}$ as follows: $\mathbf{q} = \mathrm{BiLSTM}([\mathbf{H}^Q \mathbf{A}^{QQ\top}, \mathbf{H}^Q])$, $\mathbf{A}^{QQ} = \mathrm{softmax}((\mathbf{H}^Q)^\top \mathbf{H}^Q)$ (2), where softmax is applied over the last dimension of an input tensor by default.",
"Using the question summary $\mathbf{q}$, we apply another attention (AddAtt) over the memory to obtain the answer type summary $\mathbf{m}^t$, path summary $\mathbf{m}^p$ and context summary $\mathbf{m}^c$: $\mathbf{m}^x = \sum_{i=1}^{|A|} a^x_i \mathbf{M}^{v_x}_i$, $\mathbf{a}^x = \mathrm{Att}_{add}(\mathbf{q}, \mathbf{M}^{k_x})$ (3), where $x \in \{t, p, c\}$, and $\mathrm{Att}_{add}(\mathbf{x}, \mathbf{y}) = \mathrm{softmax}(\tanh([\mathbf{x}^\top, \mathbf{y}] \mathbf{W}_1) \mathbf{W}_2)$, with $\mathbf{W}_1 \in \mathbb{R}^{2d \times d}$ and $\mathbf{W}_2 \in \mathbb{R}^{d \times 1}$ being trainable weights.",
"So far, we have obtained the KB summary $\mathbf{m} = [\mathbf{m}^t; \mathbf{m}^p; \mathbf{m}^c]$ in light of the question.",
"We proceed to compute the question-to-KB attention between question word $q_i$ and KB aspects, formulated as $\mathbf{A}^{Qm} = \mathbf{H}^{Q\top} \mathbf{m}$.",
"By applying max pooling over the last dimension (i.e., the KB aspect dimension) of $\mathbf{A}^{Qm}$, that is, $a^Q_i = \max_j \mathbf{A}^{Qm}_{ij}$, we select the strongest connection between $q_i$ and the KB.",
"The idea behind it is that each word in a question serves a specific purpose (i.e., indicating answer type, path or context), and max pooling can help find out that purpose.",
"We then apply a softmax over the resulting vector to obtain $\mathbf{a}^Q$, a KB-aware question attention vector, since it indicates the importance of each $q_i$ in light of the KB.",
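"Putting the pieces of the KB-aware attention module together, here is a hedged sketch of $\mathrm{Att}_{add}$ from Equation (3) and the question-to-KB max-pooling step; batching is omitted for clarity, and all names are our own.",
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Att_add from Equation (3): scores each memory slot against a
    query vector with a tanh feed-forward layer."""
    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(2 * d, d)
        self.W2 = nn.Linear(d, 1)

    def forward(self, q, M):                  # q: (d,), M: (|A|, d)
        q_tiled = q.unsqueeze(0).expand(M.size(0), -1)
        scores = self.W2(torch.tanh(self.W1(torch.cat([q_tiled, M], -1))))
        return F.softmax(scores.squeeze(-1), dim=0)    # (|A|,)

def kb_aware_question_attention(H_Q, m):
    """H_Q: (d, |Q|); m: (d, 3) stacking type/path/context summaries.
    Max pooling over the KB-aspect dimension picks each word's
    strongest connection to the KB, then softmax yields a^Q."""
    A_Qm = H_Q.t() @ m                         # (|Q|, 3)
    return F.softmax(A_Qm.max(dim=1).values, dim=0)    # (|Q|,)
```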
"Importance module The importance module focuses on important KB aspects as measured by their relevance to the questions.",
"We start by computing a $|Q| \times |A| \times 3$ attention tensor $\mathbf{A}^{QM}$ which indicates the strength of connection between each pair $\{q_i, A^x_j\}_{x \in \{t, p, c\}}$.",
"Then, we take the max over the question word dimension of $\mathbf{A}^{QM}$ and normalize it to get an attention matrix $\mathbf{A}^M$, which indicates the importance of each answer aspect for each candidate answer.",
"After that, we proceed to compute question-aware memory representations $\tilde{\mathbf{M}}^k$.",
"Thus, we have: $\bar{\mathbf{M}}^v = \{\bar{\mathbf{M}}^v_i\}_{i=1}^{|A|} \in \mathbb{R}^{|A| \times d}$ with $\bar{\mathbf{M}}^v_i = \sum_{j=1}^{3} \mathbf{M}^v_{ij}$; $\tilde{\mathbf{M}}^k = \{\tilde{\mathbf{M}}^k_i\}_{i=1}^{|A|} \in \mathbb{R}^{|A| \times d}$ with $\tilde{\mathbf{M}}^k_i = \mathbf{A}^M_i \mathbf{M}^k_i$; $\mathbf{A}^M = \mathrm{softmax}(\bar{\mathbf{A}}^{M\top})^\top$; $\bar{\mathbf{A}}^M = \max_i \{\mathbf{A}^{QM}_i\}_{i=1}^{|Q|}$; $\mathbf{A}^{QM} = (\mathbf{M}^k \mathbf{H}^Q)^\top$ (4).",
"Enhancing module We further enhance the question and KB representations by exploiting the two-way attention.",
"We compute the KB-enhanced question representation $\tilde{\mathbf{q}}$, which incorporates the relevant KB information, by applying max pooling over the last dimension (i.e., the answer aspect dimension) of $\mathbf{A}^{QM}$, that is, $\bar{\mathbf{A}}^{QM} = \max_k \{\mathbf{A}^{QM}_{\cdot,\cdot,k}\}_{k=1}^{3}$, and then normalizing it to get a question-to-KB attention matrix $\hat{\mathbf{A}}^{QM}$, from which we compute the question-aware KB summary and incorporate it into the question representation: $\tilde{\mathbf{H}}^Q = \mathbf{H}^Q + \mathbf{a}^Q \odot (\hat{\mathbf{A}}^{QM} \bar{\mathbf{M}}^v)^\top$.",
"Finally, we obtain a $d$-dimensional KB-enhanced question representation $\tilde{\mathbf{q}} = \tilde{\mathbf{H}}^Q \mathbf{a}^Q$.",
"Similarly, we compute a question-enhanced KB representation $\tilde{\mathbf{M}}^k$ which incorporates the relevant question information: $\tilde{\mathbf{M}}^k = \tilde{\mathbf{M}}^k + \mathbf{a}^M \odot (\mathbf{A}^{MQ} (\mathbf{H}^Q)^\top)$, $\mathbf{a}^M = (\hat{\mathbf{A}}^{QM})^\top \mathbf{a}^Q \in \mathbb{R}^{|A| \times 1}$, $\mathbf{A}^{MQ} = \mathrm{softmax}(\bar{\mathbf{A}}^{QM\top}) \in \mathbb{R}^{|A| \times |Q|}$ (5).",
"Generalization module We add a one-hop attention process before answering.",
"We use the question representation q to query over the key memory M k via an attention mechanism, and fetch the most relevant information from the value memory, which is then used to update the question vector using a GRU (Cho et al., 2014).",
"Finally, we apply a residual layer (He et al., 2016) (i.e., y = f ( x ) + x ) and batch normalization (BN) (Ioffe and Szegedy, 2015), which help the model performance in practice.",
"Thus, we have $\tilde{\mathbf{q}} = \mathrm{BN}(\tilde{\mathbf{q}} + \mathbf{q}')$, $\mathbf{q}' = \mathrm{GRU}(\tilde{\mathbf{q}}, \mathbf{m})$, $\mathbf{m} = \sum_{i=1}^{|A|} a_i \bar{\mathbf{M}}^v_i$, $\mathbf{a} = \mathrm{Att}^{GRU}_{add}(\tilde{\mathbf{q}}, \tilde{\mathbf{M}}^k)$ (6).",
"3.4 Answer module Given the representation $\tilde{\mathbf{q}}$ of question $Q$ and the representations $\{\tilde{\mathbf{M}}^k_i\}_{i=1}^{|A|}$ of candidate answers $\{A_i\}_{i=1}^{|A|}$, we compute the matching score $S(\tilde{\mathbf{q}}, \tilde{\mathbf{M}}^k_i)$ between every pair $(Q, A_i)$ as $S(\mathbf{q}, \mathbf{a}) = \mathbf{q}^\top \mathbf{a}$.",
"The candidate answers are then ranked by their scores.",
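"The scoring and ranking step amounts to a dot product, as in this short sketch (names are ours):",
```python
import torch

def rank_candidates(q_tilde, M_k_tilde):
    """Matching scores S(q, a) = q^T a for all candidates, followed
    by ranking. q_tilde: (d,), M_k_tilde: (|A|, d)."""
    scores = M_k_tilde @ q_tilde                   # (|A|,)
    order = torch.argsort(scores, descending=True)
    return scores, order
```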
"Training Intermediate modules such as the enhancing module generate premature representations of questions (e.g., $\mathbf{q}$) and candidate answers (e.g., $\mathbf{M}^k$).",
"Even though these intermediate representations are not optimal for answer prediction, we can still use them along with the final representations to jointly train the model; we find this helps training, probably because directly forcing intermediate representations to be predictive provides more supervision.",
"Moreover, we directly match interrogative words to KB answer types.",
"A question $Q$ is represented by a 16-dimensional interrogative word embedding $\mathbf{q}^w$ (we use which, what, who, whose, whom, where, when, how, why and whether) and a candidate answer $A_i$ is represented by an entity type embedding $\mathbf{H}^{t_2}_i$ of the same size.",
"We then compute the matching score $S(\mathbf{q}^w, \mathbf{H}^{t_2}_i)$ between them.",
"Although we only have weak labels for the type matching task (e.g., incorrect answers do not necessarily imply incorrect types), and there are no shared representations between the two tasks, we find in practice that this strategy helps the training process, as shown in Section 4.4.",
"Loss Function: In the training phase, we force positive candidates to have higher scores than negative candidates by using a triplet-based loss function that applies a margin-based hinge loss to question representations (e.g., $\mathbf{H}^Q \mathbf{a}^Q$) and summed answer-aspect memory representations, over pairs of positive and negative candidates,",
"where $g(\cdot, \cdot)$ is the hinge loss function, and $A^+$ and $A^-$ denote the positive (i.e., correct) and negative (i.e., incorrect) answer sets, respectively.",
"Note that at training time, the candidate answers are extracted from the KB subgraph of the gold-standard topic entity, with the memory size set to $N_{max}$.",
"We adopt the following sampling strategy, which works well in practice: if $N_{max}$ is larger than the number of positive answers $|A^+|$, we keep all the positive answers and randomly select negative answers to fill up the memory; otherwise, we randomly select $\min(N_{max}/2, |A^-|)$ negative answers and fill up the remaining memory with random positive answers.",
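"A sketch of this sampling strategy (function and variable names are ours):",
```python
import random

def fill_memory(positives, negatives, n_max=96):
    """If all positives fit, keep them and pad with random negatives;
    otherwise take up to n_max/2 negatives and fill the remaining
    slots with random positives."""
    if n_max > len(positives):
        chosen = list(positives)
        chosen += random.sample(negatives,
                                min(n_max - len(chosen), len(negatives)))
    else:
        n_neg = min(n_max // 2, len(negatives))
        chosen = random.sample(negatives, n_neg)
        chosen += random.sample(positives, n_max - n_neg)
    return chosen
```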
"Testing At testing time, we need to first find the topic entity.",
"We do this by using the top result returned by a separately trained topic entity predictor (we also compare with the result returned by the Freebase Search API).",
"Then, the answer module returns the candidate answers with the highest scores as the predicted answers.",
"Since there can be multiple answers to a given question, the candidates whose scores are within a certain margin $\theta$ of the highest score are regarded as good answers as well.",
"Therefore, we formulate the inference process as follows: $\mathcal{A}^* = \{a \mid a \in A \ \wedge\ \max_{a' \in A}\{S(\tilde{\mathbf{q}}, \tilde{\mathbf{M}}^k_{a'})\} - S(\tilde{\mathbf{q}}, \tilde{\mathbf{M}}^k_a) < \theta\}$ (8), where $\max_{a' \in A}\{S(\tilde{\mathbf{q}}, \tilde{\mathbf{M}}^k_{a'})\}$ is the score of the best matched answer and $\mathcal{A}^*$ is the predicted answer set.",
"Note that $\theta$ is a hyper-parameter which controls the degree of tolerance.",
"Decreasing the value of $\theta$ makes the model stricter when predicting answers.",
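"Equation (8) translates into a few lines of code; `theta` plays the role of the tolerance hyper-parameter:",
```python
def select_answers(scores, theta=0.7):
    """Equation (8): keep every candidate whose score is within the
    margin theta of the best score; `scores` maps candidate -> S(q, a)."""
    best = max(scores.values())
    return {a for a, s in scores.items() if best - s < theta}
```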
"Given a question $Q$, the goal of the topic entity predictor is to find the best topic entity $c$ from the candidate set $\{C_i\}_{i=1}^{|C|}$ returned by external topic entity linking tools (we use the Freebase Search API and S-MART (Yang and Chang, 2016) in our experiments).",
"We use a convolutional network (CNN) to encode $Q$ into a $d$-dimensional vector $\mathbf{e}$.",
"For candidate topic entity C i , we encode three types of KB aspects, namely, the entity name, entity type and surrounding relations where both entity name and type are represented as a sequence of words while surrounding relations are represented as a bag of sequences of words.",
"Specifically, we use three CNNs to encode them into three $d$-dimensional vectors, namely, $\mathbf{C}^n_i$, $\mathbf{C}^t_i$ and $\mathbf{C}^{r_1}_i$.",
"Note that for surrounding relations, we first encode each of the relations and then compute their average.",
"Additionally, we compute an average of the relation embeddings via a relation embedding layer, which we denote as $\mathbf{C}^{r_2}_i$.",
"We then apply linear projections on the above vectors as follows: $\mathbf{P}^k_i = f_k([\mathbf{C}^n_i; \mathbf{C}^t_i; \mathbf{C}^{r_1}_i; \mathbf{C}^{r_2}_i])$, $\mathbf{P}^v_i = f_v([\mathbf{C}^n_i; \mathbf{C}^t_i; \mathbf{C}^{r_1}_i; \mathbf{C}^{r_2}_i])$ (9), where $\mathbf{P}^k_i$ and $\mathbf{P}^v_i$ are $d$-dimensional key and value representations of candidate $C_i$, respectively.",
"Furthermore, we compute the updated question vector $\tilde{\mathbf{e}}$ using the generalization module mentioned earlier.",
"Next, we use a dot product to compute the similarity score between Q and C i .",
"A triplet-based loss function is used, formulated as $o = g(\mathbf{e}, \mathbf{P}^k_i) + g(\tilde{\mathbf{e}}, \mathbf{P}^k_i)$, where $g(\cdot)$ is the aforementioned hinge loss function.",
"When training the predictor, along with the candidates returned from external entity linking tools, we do negative sampling (using string matching) to get more supervision.",
"In the testing phase, the candidate with the highest score is returned as the best topic entity and no negative sampling is applied.",
"This section provides an extensive evaluation of our proposed BAMnet model against state-of-the-art KBQA methods.",
"The implementation of BAMnet is available at https://github.com/hugochan/BAMnet.",
"We use the following KB and dataset, described below.",
"Freebase This is a large-scale KB (Google, 2018) that consists of general facts organized as subject-property-object triples.",
"It has 41M non-numeric entities, 19K properties, and 596M assertions.",
"WebQuestions This dataset (Berant et al., 2013) ( nlp.stanford.edu/software/sempre ) contains 3,778 training examples and 2,032 test examples.",
"We further split the training instances into a training set and a development set via an 80%/20% split.",
"Approximately 85% of questions can be directly answered via a single FreeBase predicate.",
"Also, each question can have multiple answers.",
"In our experiments, we use a development version of the dataset (Baudis and Pichl, 2016), which additionally provides (potentially noisy) entity mentions for each question.",
"Following (Berant et al., 2013), macro F1 scores (i.e., the average of F1 scores over all questions) are reported on the WebQuestions test set.",
"When constructing the vocabularies of words, entity types or relation types, we only consider those questions and their corresponding KB subgraphs appearing in the training and validation sets.",
"The vocabulary size of words is $V = 100{,}797$.",
"There are 1,712 entity types and 4,996 relation types in the KB subgraphs.",
"Notably, in FreeBase, one entity might have multiple entity types.",
"We only use the first one available, which is typically the most concrete one.",
"For those non-entity nodes which are boolean values or numbers, we use bool or num as their types, respectively.",
"We also adopt a query delexicalization strategy where for each question, the topic entity mention as well as constraint entity mentions (i.e., those belonging to date, ordinal or number) are replaced with their types.",
"When encoding KB context, if the overlap belongs to the above types, we also do this delexicalization, which will guarantee it matches up with the delexicalized question well in the embedding space.",
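"A toy sketch of this delexicalization strategy; real mention detection is more involved than the word-level lookup shown here, and the example follows the paper's France/ww2 question.",
```python
def delexicalize(question_tokens, mention_types):
    """Replace topic/constraint entity mentions with their types,
    e.g. 'France' -> 'location'.  `mention_types` maps a detected
    mention string to its type."""
    return [mention_types.get(tok, tok) for tok in question_tokens]

# delexicalize(["who", "did", "France", "surrender", "to", "in", "ww2"],
#              {"France": "location", "ww2": "number"})
# -> ["who", "did", "location", "surrender", "to", "in", "number"]
```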
"Given a topic entity, we extract its 2-hop subgraph (i.e., $h = 2$) to collect candidate answers, which is sufficient for WebQuestions.",
"At training time, the memory size is limited to $N_{max} = 96$ candidate answers (for the sake of efficiency).",
"If there are more potential candidates, we do random sampling as mentioned earlier.",
"We initialize word embeddings with pre-trained GloVe vectors (Pennington et al., 2014) with word embedding size $d_v = 300$.",
"The relation embedding size $d_p$, entity type embedding size $d_t$ and hidden size $d$ are set as 128, 16 and 128, respectively.",
"The dropout rates on the word embedding layer, question encoder side and the answer encoder side are 0.3, 0.3 and 0.2, respectively.",
"The batch size is set as 32, and the answer module threshold is $\theta = 0.7$.",
"As for the topic entity prediction, we use the same hyperparameters.",
"For each question, there are 15 candidates after negative sampling at training time.",
"When encoding a question, we use a CNN with filter sizes 2 and 3.",
"A linear projection is applied to merge features extracted with different filters.",
"When encoding a candidate aspect, we use a CNN with filter size 3.",
"Linear activation and max-pooling are used together with CNNs.",
"In the training process, we use the Adam optimizer (Kingma and Ba, 2014) to train the model.",
"The initial learning rate is set as 0.001 which is reduced by a factor of 10 if no improvement is observed on the validation set in 3 consecutive epochs.",
"The training procedure stops if no improvement is observed on the validation set in 10 consecutive epochs.",
"The hyper-parameters are tuned on the development set.",
"As shown in Table 1, our method can achieve an F1 score of 0.557 when the gold topic entity is known, which gives an upper bound of our model performance.",
"When the gold topic entity is unknown, we report the results using:",
"1) the Freebase Search API, which achieves a recall@1 score of 0.857 on the test set for topic entity linking, and",
"2) the topic entity predictor, which achieves a recall@1 score of 0.898 for entity retrieval.",
"As for the performance of BAMnet on WebQuestions, it achieves an F1 score of 0.518 using the topic entity predictor, which is significantly better than the F1 score of 0.497 using the Freebase Search API.",
"We can observe that BAMnet significantly outperforms previous state-of-the-art IR-based methods, which conclusively demonstrates the effectiveness of modeling bidirectional interactions between questions and the KB.",
"It is important to note that unlike the state-of-the-art SP-based methods, BAMnet relies on no external resources and very few hand-crafted features, but still remains competitive with those approaches.",
"Based on carefully hand-crafted rules, some SP-based methods (Bao et al., 2016; Yih et al., 2015) can better model questions with constraints and aggregations.",
"For example, (Yih et al., 2015) applies many manually designed rules and features to improve performance on questions with constraints and aggregations, and (Bao et al., 2016) directly models temporal (e.g., after 2000), ordinal (e.g., first) and aggregation constraints (e.g., how many) by adding detected constraint nodes to query graphs.",
"In contrast, our method is end-to-end, with very few hand-crafted rules.",
"Additionally, (Yavuz et al., 2016; Bao et al., 2016) train their models on external Q&A datasets to get extra supervision.",
"For a fairer comparison, we only show their results without training on external Q&A datasets.",
"Similarly, for hybrid systems (Feng et al., 2016; Xu et al., 2016), we only report results without using Wikipedia free text.",
"Table 1: Results on the WebQuestions test set.",
"| Methods (ref) | Macro F1 |",
"|---|---|",
"| SP-based: (Berant et al., 2013) | 0.357 |",
"| (Yao and Van Durme, 2014) | 0.443 |",
"| (Wang et al., 2014) | 0.453 |",
"| (Bast and Haussmann, 2015) | 0.494 |",
"| (Berant and Liang, 2015) | 0.497 |",
"| (Yih et al., 2015) | 0.525 |",
"| (Reddy et al., 2016) | 0.503 |",
"| (Yavuz et al., 2016) | 0.516 |",
"| (Bao et al., 2016) | 0.524 |",
"| (Feng et al., 2016) | 0.471 |",
"| (Reddy et al., 2017) | 0.495 |",
"| (Abujabal et al., 2017) | 0.510 |",
"| (Hu et al., 2018) | 0.496 |",
"| IR-based: (Bordes et al., 2014a) | 0.392 |",
"| (Yang et al., 2014) | 0.413 |",
"| (Dong et al., 2015) | 0.408 |",
"| (Bordes et al., 2015) | 0.422 |",
"| (Xu et al., 2016) | 0.471 |",
"| (Hao et al., 2017) | 0.429 |",
"| Our Method: BAMnet w/ gold topic entity | 0.557 |",
"| BAMnet w/ Freebase Search API | 0.497 |",
"| BAMnet w/ topic entity predictor | 0.518 |",
"It is interesting to note that both (Yih et al., 2015) and (Bao et al., 2016) also use the ClueWeb dataset for learning more accurate semantics.",
"The F1 score of (Yih et al., 2015) drops from 0.525 to 0.509 if ClueWeb information is removed.",
"To summarize, BAMnet achieves state-of-the-art performance of 0.518 without recourse to any external resources and relies only on very few hand-crafted features.",
"If we assume gold topic entities are given, then BAMnet achieves an F1 of 0.557.",
"We now discuss the performance impact of the different modules and strategies in BAMnet.",
"Note that gold topic entity is assumed to be known when we do this ablation study, because the error introduced by topic entity prediction might reduce the real performance impact of a module or strategy.",
"As shown in Table 2, significant performance drops were observed after turning off some key attention modules, which confirms that the real power of our method comes from the idea of hierarchical two-way attention.",
"Table 2: Ablation results on the WebQuestions test set.",
"| Methods | Macro F1 |",
"|---|---|",
"| all | 0.557 |",
"| w/o two-layered bidirectional attn | 0.534 |",
"| w/o kb-aware attn (+self-attn) | 0.544 |",
"| w/o importance module | 0.540 |",
"| w/o enhancing module | 0.550 |",
"| w/o generalization module | 0.542 |",
"| w/o joint type matching | 0.545 |",
"| w/o topic entity delexicalization | 0.529 |",
"| w/o constraint delexicalization | 0.554 |",
"As we can see, when turning off the two-layered bidirectional attention network , the model performance drops from 0.557 to 0.534.",
"Among all submodules in the attention network, the importance module is the most significant since the F1 score drops to 0.540 without it, thereby confirming the effectiveness of modeling the query-to-KB attention flow.",
"On the flip side, the importance of modeling the KB-to-query attention flow is confirmed by the fact that replacing the KB-aware attention module with self-attention significantly degrades the performance.",
"Besides, the secondary attention layer, the enhancing module , also contributes to the overall model performance.",
"Finally, we find that the topic entity delexicalization strategy has a big influence on the model performance while the constraint delexicalization strategy only marginally boosts the performance.",
"Here, we show that our method does capture the mutual interactions between question words and KB aspects, by visualizing the attention matrix $\mathbf{A}^{QM}$ produced by the reasoning module.",
"Fig. 4 shows the attention heatmap generated for a test question who did location surrender to in number (where location and number are entity types which replace the topic entity mention France and the constraint entity mention ww2, respectively in the original question).",
"As we can see, the attention network successfully detects the interactions between who and answer type, surrender to and answer path, and focuses more on those words when encoding the question.",
"To further examine the importance of the two-way flow of interactions, in Table 3 we show the predicted answers of BAMnet with and without the two-layered bidirectional attention network on sample questions from the WebQuestions test set.",
"We divide the questions into three categories based on which kind of KB aspect is the most crucial for answering them.",
"As we can see, compared to the simplified version that is not equipped with bidirectional attention, our model is more capable of answering all three types of questions.",
"To better examine the limitations of our approach, we randomly sampled 100 questions on which our method performed poorly (i.e., with per-question F1 score less than 0.6), and categorized the errors.",
"We found that around 33% of errors are due to label issues of gold answers and are not real mistakes.",
"This includes incomplete and erroneous labels, and also alternative correct answers.",
"Constraints are another source of errors (11%), with temporal constraints accounting for most.",
"Some questions have implicit temporal (e.g., tense) constraints which our method does not model.",
"A third source of errors is what we term type errors (13%), where our method generates more answers than needed because it poorly utilizes answer type information.",
"The lexical gap is another source of errors (5%).",
"Finally, other sources of errors (38%) include topic entity prediction error, question ambiguity, incomplete answers and other miscellaneous errors.",
"We introduced a novel and effective bidirectional attentive memory network for the purpose of KBQA.",
"To the best of our knowledge, we are the first to model the mutual interactions between questions and a KB, which allows us to distill the information that is most relevant to answering the questions, on both the question side and the KB side.",
"Experimental results show that our method significantly outperforms previous IR-based methods while remaining competitive with hand-crafted SP-based methods.",
"Both ablation study and interpretability analysis verify the effectiveness of the idea of modeling mutual interactions.",
"In addition, our error analysis shows that our method actually performs better than what the evaluation metrics indicate.",
"This work is supported by IBM Research AI through the IBM AI Horizons Network.",
"We thank the anonymous reviewers for their constructive suggestions."
] | [
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"objective",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"result",
"other",
"other"
] |
[
"Recent years have witnessed growing interests in incorporating external knowledge such as pre-trained word embeddings (PWEs) or pre-trained language models (PLMs) into neural topic modeling.",
"However, we found that employing PWEs and PLMs for topic modeling only achieved limited performance improvements but with huge computational overhead.",
"In this paper, we propose a novel strategy to incorporate external knowledge into neural topic modeling where the neural topic model is pre-trained on a large corpus and then fine-tuned on the target dataset.",
"Experiments have been conducted on three datasets and results show that the proposed approach significantly outperforms both current state-of-the-art neural topic models and some topic modeling approaches enhanced with PWEs or PLMs.",
"Moreover, further study shows that the proposed approach greatly reduces the need for the huge size of training data.",
"Topic models have been widely used for discovering hidden themes from a large collection of documents in an unsupervised manner.",
"Recently, to avoid the complex and specific inference process of graph model-based method such as LDA (Blei et al., 2003), neural topic modeling that utilizes neural-network-based black-box inference has been the main research direction in this field (Blei, 2012; Miao et al., 2016; Srivastava and Sutton, 2017).",
"Typically, neural topic models infer topics of a document by utilizing its bag-of-words (BoWs) representation to capture word co-occurrence patterns.",
"The BoWs representation, however, fails to encode rich word semantics, leading to relatively inferior quality of topics generated by the topic models.",
"Therefore, approaches have been proposed to address the limitation of the BoWs representation by incorporating external knowledge, such as pre-trained word embeddings (PWEs) (Das et al., 2015; Wang et al., 2020; Dieng et al., 2020).",
"In recent years, pre-trained language models (PLMs) (Peters et al., 2018; Devlin et al., 2019; Brown et al., 2020) have achieved state-of-the-art performance on a wide range of natural language processing tasks.",
"Different from PWEs (which, in this paper, refer to context-free embeddings), in which a word is mapped to a static word embedding, PLMs generate a specific word embedding for each occurrence of a word depending on the context.",
"It is appealing to incorporate PLMs into topic models since contextualized embeddings generated by PLMs encode richer semantics and naturally deal with word polysemy (Pasini et al., 2020).",
"One straightforward way is to replace BoWs representation with the outputs of PLM (Bianchi et al., 2020b) in existing topic models or take PLM outputs as additional inputs to topic modeling (Bianchi et al., 2020a).",
"A more sophisticated approach is to distill the knowledge of a PLM into a topic model.",
"For example, (Hoyle et al., 2020) employed the probability estimates of a teacher PLM over a text sequence to guide the training of a student topic model.",
"However, the approaches mentioned above still have limitations.",
"Firstly, using PLMs for topic model training in such ways leads to huge computational overhead.",
"Most neural topic models are based on shallow multi-layer perceptions with few hidden units.",
"However, most popular PLMs are based on deep Transformers (Vaswani et al., 2017) where at each layer expensive self-attention operations are performed, which have a time complexity quadratic in document length.",
"Therefore, the overall training time is dominated by PLM, and it will be worse if PLM is further fine-tuned, as shown in (Hoyle et al., 2020).",
"Secondly, there is a gap between the training objectives of PLMs and topic models: PLMs are trained to learn the semantic and syntactic knowledge within a sentence, while topic models focus on extracting the main themes over the whole corpus.",
"As shown in Table 4, a model based on GloVe embeddings (Pennington et al., 2014) performs better than PLMs-based models such as those proposed in (Bianchi et al., 2020a) and (Bianchi et al., 2020b).",
"To overcome these challenges, we propose a simple yet effective strategy, namely Pre-trained Neural Topic Model (PT-NTM), to utilize extensive knowledge from large corpora for neural topic modeling with low computational complexity.",
"Instead of pre-training the embeddings and acquiring knowledge indirectly, PT-NTM directly pre-trains the topic model itself on the knowledge source corpora.",
"In specific, a neural topic model is firstly trained on a large corpus only once, which is called pre-training .",
"Afterward, it is fine-tuned on any other dataset, which is called fine-tuning .",
"As the architecture of the neural topic model used in pretraining and fine-tuning is the same, it incurs little computational overhead to any subsequent training.",
"Experiments have been conducted on three datasets and the results show that the proposed approach significantly outperforms not only some state-of-the-art neural topic models but also the topic modeling approaches using PWEs and PLMs.",
"Moreover, it is observed that on the NYTimes dataset, the neural topic model trained on 1% of the whole dataset using the proposed approach achieves performance superior to other baseline models trained on the whole dataset.",
"It further shows that the proposed approach greatly reduces the need for the huge size of training data.",
"The main contributions are: We proposed a simple yet effective strategy for training neural topic models in which the models are pre-trained on a large corpus and then fine-tuned on a specific dataset.",
"We conducted extensive experiments and the results show that the pre-trained neural topic models significantly outperform baselines in terms of topic coherence and topic diversity.",
"The proposed approach greatly reduces the amount of training data needed.",
"In our experiments on the NYTimes dataset, a pre-trained model fine-tuned with 1% of the documents achieves performance superior to baselines trained on the whole dataset.",
"Due to the flexible modeling choices and high representation capacity, neural networks have been widely used for topic modeling in recent years.",
"Some approaches (Kingma and Welling, 2013; Miao et al., 2016) model topics with variational autoencoders (VAEs) and view the latent variables of VAEs as document topics.",
"However, topic models typically use the Dirichlet distribution as the prior of multinomial topic distributions, while the reparameterization trick required by VAEs hinders the usage of a Dirichlet prior.",
"Therefore, some followup works (Srivastava and Sutton, 2017; Card et al., 2018) used logistic normal to approximate Dirichlet.",
"Another family of neural topic models (Nan et al., 2019; Wang et al., 2020; Hu et al., 2020) overcome the problem with adversarial training (Goodfellow et al., 2014) by encouraging the model to generate topic distributions that are similar to samples randomly drawn from a Dirichlet prior.",
"There are mainly two ways to incorporate external knowledge into topic modeling, namely via PWEs and PLMs.",
"Some attempts incorporate pre-trained word representations into neural topic models.",
"For example, (Card et al., 2018; Dieng et al., 2020) used PWEs to initialize word embeddings of topic models.",
"(Wang et al., 2020) built a generative process that models word embeddings with per-topic Gaussian distributions.",
"Beyond static word embeddings, researchers also tried to utilize PLMs.",
"(Bianchi et al., 2020b,a) treated PLM outputs as an additional knowledge source to enhance or replace BoW-based inputs.",
"(Hoyle et al., 2020) employed knowledge distillation to guide the training of a student topic model with a PLM teacher network.",
"Recently, (Song et al., 2020) proposed TopicOcean to train LDA-based topic models on large corpora and then transfer the knowledge of accumulated topics to new corpora which can also be considered a way of pre-training.",
"It should be pointed out that the proposed PT-NTM differs from previous PLM-based topic models and from TopicOcean in that, in PT-NTM, the architecture of the neural topic model is identical during pre-training and fine-tuning, whereas the other methods combine a large PLM with a topic model, i.e., two different model architectures.",
"In this section, we describe the detailed processes of PT-NTM.",
"First, we will introduce the architecture of the neural topic model employed in PT-NTM, which we call NTM in the following.",
"Then, we will introduce how to pre-train the neural topic model on a large-scale dataset.",
"Finally, we will introduce how to fine-tune the pre-trained neural topic model on the target dataset.",
"For the architecture of NTM, we follow the encoder-decoder architecture, as employed by many neural topic models (Srivastava and Sutton, 2017; Miao et al., 2017; Nan et al., 2019).",
"The encoder takes a document's BoW $\mathbf{x} \in \mathbb{R}^V$ as input and infers its topic distribution $\mathbf{z} \in \mathbb{R}^K$, where $V$ is the vocabulary size and $K$ the topic number.",
"The decoder then reconstructs the original document from $\mathbf{z}$, denoted as $\hat{\mathbf{x}}$.",
"The whole architecture of NTM is shown in Figure 1.",
"Specifically, the encoder is a stack of $N + 1$ MLP layers.",
"From the bottom to the top, the first N layers have an identical structure.",
"Each layer has four sub-layers: Dropout (Srivastava et al., 2014), Linear, BatchNorm (Ioffe and Szegedy, 2015), and LeakyReLU (Maas et al., 2013).",
"The final layer is a Dropout sub-layer and a Linear transformation followed by a Softmax.",
"The decoder shares the same architecture as the encoder, though they may vary in input/output dimensions.",
"In our experiments, we set a Dropout probability of 0.5 in the first encoder layer and 0.2 in the remaining encoder and decoder layers.",
"All LeakyReLU sub-layers have a negative slope of 0.01.",
"The reconstruction loss $\mathcal{L}_{rec}$ encourages the decoder outputs $\hat{X} = \{\hat{\mathbf{x}}^{(i)}\}_{i=1}^{m}$ to be as similar as possible to the corresponding encoder inputs $X = \{\mathbf{x}^{(i)}\}_{i=1}^{m}$ for each training batch, where $m$ is the batch size.",
"For the topic distribution $\mathbf{z}$, what we have done above is insufficient to generate reasonable topics, since $\mathbf{z}$'s distribution $Q$ is not well defined.",
"To this end, we follow a similar approach to that proposed in (Nan et al., 2019) and further impose on $\mathbf{z}$ a Dirichlet prior $P$ by minimizing the Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) between the two distributions $P$ and $Q$: $\mathcal{L}_{MMD}(\tilde{Z}, Z) = -\frac{2}{m^2} \sum_{i,j} k(\tilde{\mathbf{z}}^{(i)}, \mathbf{z}^{(j)}) + \frac{1}{m(m-1)} \sum_{i \neq j} \left( k(\tilde{\mathbf{z}}^{(i)}, \tilde{\mathbf{z}}^{(j)}) + k(\mathbf{z}^{(i)}, \mathbf{z}^{(j)}) \right)$ (2), where $Z = \{\mathbf{z}^{(i)}\}_{i=1}^{m}$ are topic distributions randomly drawn from the prior $P$, $\tilde{Z} = \{\tilde{\mathbf{z}}^{(i)}\}_{i=1}^{m}$ are encoder outputs, and $k$ is the kernel function, which in our experiments is the information diffusion kernel (Lebanon and Lafferty, 2003), following (Nan et al., 2019).",
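"A hedged sketch of this MMD estimate; `kernel` stands in for any positive-definite kernel, such as the information diffusion kernel used here, and the function name is ours.",
```python
import torch

def mmd_loss(z_enc, z_prior, kernel):
    """MMD estimate (Equation (2)) between a batch of encoder outputs
    and samples from the Dirichlet prior; both inputs are (m, K)."""
    m = z_enc.size(0)
    k_cross = kernel(z_enc, z_prior)       # (m, m) cross term
    k_enc = kernel(z_enc, z_enc)
    k_prior = kernel(z_prior, z_prior)
    off_diag = 1.0 - torch.eye(m, device=z_enc.device)
    within = ((k_enc + k_prior) * off_diag).sum() / (m * (m - 1))
    cross = -2.0 / (m * m) * k_cross.sum()
    return within + cross
```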
"The overall training objective combines the two losses, $\mathcal{L} = \mathcal{L}_{rec} + \lambda \, r \, \mathcal{L}_{MMD}$ (3), where we balance $\mathcal{L}_{rec}$ and $\mathcal{L}_{MMD}$ with a hyperparameter $\lambda$ and another factor $r$,",
"$r = \frac{\|\nabla_{b^{(N+1)}} \mathcal{L}_{rec}\|_2}{\|\nabla_{b^{(N+1)}} \mathcal{L}_{MMD}\|_2}$ (4), where $\|\cdot\|_2$ denotes the L2 norm and $b^{(N+1)}$ is the bias term of the last Linear sub-layer of the encoder, i.e., the one just before the Softmax sub-layer.",
"Equation (4) shows that the two losses are balanced by their relative gradient norms with respect to $b^{(N+1)}$.",
"We found in our experiments that $r$ greatly reduces the effort of tuning $\lambda$ and generally produces better results.",
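"One way to realize $r$ with automatic differentiation is sketched below; this is our reading of Equation (4), not the authors' released code. Detaching makes $r$ act as a plain scalar weight rather than a gradient path.",
```python
import torch

def balance_factor(loss_rec, loss_mmd, bias):
    """Ratio of the two losses' gradient norms w.r.t. the last
    encoder bias b^(N+1) (Equation (4)), detached from the graph."""
    g_rec, = torch.autograd.grad(loss_rec, bias, retain_graph=True)
    g_mmd, = torch.autograd.grad(loss_mmd, bias, retain_graph=True)
    return (g_rec.norm(2) / (g_mmd.norm(2) + 1e-12)).detach()

# Overall objective (Equation (3)), with lam the MMD weight:
# loss = loss_rec + lam * balance_factor(loss_rec, loss_mmd, bias) * loss_mmd
```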
"By pre-training the topic model on a large and topically diverse corpus, we expect the model would learn topic-related knowledge that is general enough to be reused on other corpora.",
"For the proposed approach, the knowledge may include word semantics, common senses, and document encoding and decoding patterns at each layer.",
"The details of the pre-training procedure are presented in Algorithm 1.",
"The pre-training corpus $\mathcal{D}$ is a subset of the OpenWebText dataset (Gokaslan and Cohen, 2019), an open-source recreation of the WebText dataset as detailed in (Radford et al., 2019).",
"We preprocess the data by tokenization, lemmatization, stopword removal, and keeping only words that occur in at least 50 documents.",
"After preprocessing, there are about 392K documents, consisting of 45K unique words, in the resulting dataset.",
"At each training mini-batch, we update model parameters according to Equation (3) using the Adam optimizer (Kingma and Ba, 2014).",
"Algorithm 1 Pre-training.",
"Require: $\mathcal{D}$, the pre-training corpus; $E$, the encoder; $D$, the decoder; $\theta$, the parameters of $E$ and $D$; $\theta_0$, the initial parameters; $m$, the batch size; $n$, the number of training epochs; $P(\mathbf{z})$, the Dirichlet prior.",
"1: $\theta \leftarrow \theta_0$",
"2: for $i = 1, \dots, n$ do",
"3: Shuffle $\mathcal{D}$.",
"4: for each $X = \{\mathbf{x}^{(j)}\}_{j=1}^{m}$ from $\mathcal{D}$ do",
"5: $\tilde{Z} \leftarrow E(X)$; $\hat{X} \leftarrow D(\tilde{Z})$",
"6: Sample $Z = \{\mathbf{z}^{(j)}\}_{j=1}^{m} \sim P(\mathbf{z})$.",
"7: Compute $\mathcal{L}$ by Equation (3).",
"8: $\theta \leftarrow \mathrm{Adam}(\nabla_\theta \frac{1}{m} \sum_{j=1}^{m} \mathcal{L}^{(j)}, \theta)$",
"9: end for",
"10: end for",
"3.3 Fine-tuning Fine-tuning is the process of adapting the pre-trained topic model to a specific dataset.",
"However, directly fine-tuning the pre-trained model on a new dataset does not always work and may introduce severe bias to subsequent tuning steps since the ideal number of topics might change and the corpus-wide topic distributions might be different.",
"Therefore, our fine-tuning begins with the pre-trained model but randomly re-initializes parameters in the last encoder layer and the first decoder layer.",
"If we fine-tune the model without any re-initialization, we find that in our experiments the corpus-wide topic distributions discovered by the fine-tuned model would be biased towards the topic distribution of the pre-training corpus, which is unexpected.",
"The proposed fine-tuning strategy with re-initialization solves this issue.",
"Algorithm 2 shows the fine-tuning steps.",
"We keep the pre-trained parameters fixed for the first $n_1$ epochs and use a small learning rate for them in the remaining training epochs, since they have already been well trained before fine-tuning.",
"Algorithm 2 Fine-tuning.",
"Require: $\mathcal{D}$, the target corpus; $E$, the encoder; $D$, the decoder; $\theta_r$, the randomly initialized parameters; $\theta_p$, the pre-trained parameters; $m$, the batch size; $n$, the number of training epochs; $n_1$, with $n_1 \in \mathbb{N}$ and $0 \leq n_1 \leq n$; $P(\mathbf{z})$, the Dirichlet prior.",
"1: for $i = 1, \dots, n$ do",
"2: Shuffle $\mathcal{D}$.",
"3: for each $X = \{\mathbf{x}^{(j)}\}_{j=1}^{m}$ from $\mathcal{D}$ do",
"4: $\tilde{Z} \leftarrow E(X)$; $\hat{X} \leftarrow D(\tilde{Z})$",
"5: Sample $Z = \{\mathbf{z}^{(j)}\}_{j=1}^{m} \sim P(\mathbf{z})$.",
"6: Compute $\mathcal{L}$ by Equation (3).",
"7: $\theta_r \leftarrow \mathrm{Adam}(\nabla_{\theta_r} \frac{1}{m} \sum_{j=1}^{m} \mathcal{L}^{(j)}, \theta_r)$",
"8: if $i > n_1$ then",
"9: $\theta_p \leftarrow \mathrm{Adam}(\nabla_{\theta_p} \frac{1}{m} \sum_{j=1}^{m} \mathcal{L}^{(j)}, \theta_p)$",
"10: end if",
"11: end for",
"12: end for",
"By comparing Algorithm 1 with Algorithm 2, it can be observed that the fine-tuning process adds little overhead to the training stage.",
"More importantly, the proposed method does not introduce any additional computations or parameters during inference.",
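"The re-initialization and two-learning-rate setup of Algorithm 2 might be wired up as follows; the helper and its arguments are hypothetical, and freezing the pre-trained group for the first $n_1$ epochs is left to the training loop.",
```python
import torch
import torch.nn as nn

def build_finetune_optimizer(model, reinit_modules,
                             lr_new=2e-2, lr_pre=1e-5):
    """Re-initialize the given modules (e.g., the last encoder layer
    and the first decoder layer), then give re-initialized and
    pre-trained parameters separate learning rates."""
    reinit_params = []
    for module in reinit_modules:
        for p in module.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)   # re-initialize weights
            else:
                nn.init.zeros_(p)            # re-initialize biases
            reinit_params.append(p)
    reinit_ids = {id(p) for p in reinit_params}
    pretrained = [p for p in model.parameters()
                  if id(p) not in reinit_ids]
    return torch.optim.Adam([{"params": reinit_params, "lr": lr_new},
                             {"params": pretrained, "lr": lr_pre}])
```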
"We used three datasets from (Hu et al., 2020): NYTimes, Grolier, and 20Newsgroups.",
"We did not include the DBPedia dataset as it is based on Wikipedia and potentially overlaps with the dataset used for our pre-training.",
"The dataset statistics are shown in Table 1.",
"The proposed basic model, NTM, is the one described in Section 3 without pre-training.",
"Table 1: Dataset statistics.",
"| Dataset | #Documents | Vocabulary Size |",
"|---|---|---|",
"| NYTimes (http://archive.ics.uci.edu/ml/datasets/Bag+of+Words) | 99,992 | 12,604 |",
"| Grolier (https://cs.nyu.edu/~roweis/data) | 29,762 | 15,276 |",
"| 20Newsgroups (http://qwone.com/~jason/20Newsgroups) | 11,258 | 2,000 |",
"Both the encoder and the decoder have three layers ($N = 2$) and 300 neurons at each hidden layer.",
"We have four variants:",
"NTM-w2v: we initialize the weights $\mathbf{w}_{e1} \in \mathbb{R}^{V \times 300}$ of the first encoder Linear sub-layer and $\mathbf{w}_{d3} \in \mathbb{R}^{300 \times V}$ of the last decoder Linear sub-layer with the corresponding 300-dimensional Word2Vec embeddings trained on Google News.",
"NTM-glv: same as NTM-w2v but utilizing 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword 5.",
"PT-NTM-w2v: pre-training from the NTM-w2v initialization and then fine-tuning.",
"PT-NTM-glv: pre-training from the NTM-glv initialization and then fine-tuning.",
"The number of training epochs is 200 for pre-training, fine-tuning (PT-* models) and fresh training (NTM).",
"We used a Dirichlet prior distribution whose parameters are all $\frac{1}{K}$, where $K$ is the topic number.",
"The MMD loss weight $\lambda$ is 1 for all models except during the fine-tuning of the PT-* models, in which $\lambda$ is 0.3.",
"We will analyze the effect of $\lambda$ in our experiments.",
"During pre-training, the batch size is 1,024, the learning rate is 2e-2, and the topic number is 200.",
"For fine-tuning, $n_1$ is 100, and the learning rates for re-initialized and pre-trained parameters are 2e-2 and 1e-5, respectively (Algorithm 2), meaning that the pre-trained parameters are only slightly tuned.",
"The batch size of fine-tuning and fresh training varies on different datasets depending on their sizes.",
"Specifically, it is set to 128 for 20Newsgroups, 256 for Grolier and 512 for NYTimes.",
"Finally, it should be noted that fine-tuning on each dataset shares the same pre-trained model checkpoint for each model variant.",
"LDA (Blei et al., 2003), for which we used the GibbsLDA++ implementation.",
"ProdLDA (Srivastava and Sutton, 2017), a VAE-based model that employs logistic normal prior for topic distributions.",
"W-LDA (Nan et al., 2019).",
"Our model follows the W-LDA loss but differs in training and implementation.",
"BAT (Wang et al., 2020), an adversarially trained neural topic model.",
"ToMCAT (Hu et al., 2020), an adversarial neural topic model with cycle-consistency objective.",
"ZeroShotTM (Bianchi et al., 2020b), taking Sentence-BERT (Reimers and Gurevych, 2019) embeddings as input.",
"CombinedTM (Bianchi et al., 2020a), same as ZeroShotTM but combining the input with BoWs.",
"G-BAT (Wang et al., 2020), extending BAT to incorporate pre-trained word embeddings.",
"TopicOcean (Song et al., 2020), integrating well-trained LDAs and transferring the knowledge of accumulated topics to new corpora, which we re-implemented ourselves.",
"We evaluate the model performance with three topic coherence measures and one topic diversity measure.",
"Topic coherence measures first calculate the coherence scores of pairs of top words ranked by their topic-associated probabilities for each topic and then aggregate all topic scores as the final topic coherence.",
"The topic coherence measures used are C_A (Aletras and Stevenson, 2013), C_P (Röder et al., 2015), and NPMI (Aletras and Stevenson, 2013) over the top-10 topic words, as implemented in Palmetto (Röder et al., 2015).",
"Topic coherence measures are highly correlated with human evaluation but have no penalizing mechanism for repetitive or similar topics.",
"We remedy the problem by also evaluating topic diversity.",
"Our topic diversity measure is calculated as $TD = 1 - \frac{N_{rep}}{N_{total}}$, where $N_{total} = 10K$ is the total number of topic words and $N_{rep}$ counts the number of repetitions among all topic words.",
"For example, 5 identical words would add 4 to $N_{rep}$.",
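"This measure is a few lines of code; the sketch below follows the definition above.",
```python
def topic_diversity(topics):
    """TD = 1 - N_rep / N_total over the top-10 words of each topic;
    every occurrence of a word beyond its first counts toward N_rep."""
    words = [w for topic in topics for w in topic[:10]]
    n_total = len(words)
    n_rep = n_total - len(set(words))
    return 1.0 - n_rep / n_total
```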
"The topic modeling results are presented in Table 2.",
"We report results averaged over five runs with the topic number set to 20, 30, 50, 75, and 100, respectively, in all our experiments unless otherwise specified.",
"From Table 2, we can observe that: 1) Among all models, PT-NTM and its variants outperform other methods by a large margin.",
"Since PT-NTM and NTM share the identical model architecture, we attribute the improvements of PT-NTM over NTM to the pre-training strategy.",
"2) Among the PLM-based methods, both ZeroShotTM and CombinedTM perform badly, on some metrics even worse than regular methods.",
"We think the reason may be the gap between the learning objectives of PLMs (word order-based) and topic models (word co-occurrence-based).",
"3) For PWE-based methods, the non-pretrained methods (NTM, BAT) benefit a lot from the PWEs.",
"We think the reason may be that the PWEs are also trained based on word co-occurrence, so the gap between PWEs and topic models is relatively small.",
"Another interesting observation is that the benefit of using PWEs in topic modeling seems to diminish with our proposed topic model pre-training strategy.",
"For example, PT-NTM gives similar results compared to PT-NTM-w2v and PT-NTM-glv.",
"This shows that word semantic knowledge has somehow been captured to a certain degree by pre-training the topic model on a large corpus.",
"4) For pre-training-based models, PT-NTM outperforms TopicOcean; considering the performance gap between their base models (NTM for PT-NTM and LDA for TopicOcean), the improvement of PT-NTM is even larger.",
"Moreover, our method is based on neural networks, making it easier to incorporate PWEs or other information than TopicOcean, which is based on graphical models.",
"One concern about PT-NTM may be whether the fine-tuning stage actually works.",
"To get a sense of the topics extracted by our model, we list in Table 3 the top 4 topics extracted by PT-NTM on the pre-training and fine-tuning datasets.",
"The topic labels are assigned manually.",
"The full topic lists are presented in the attachment.",
"Contextualized word embeddings like those produced by BERT (Devlin et al., 2019) provide richer semantics than static ones like Word2Vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014).",
"Thus we also conducted experiments to test their performance on topic modeling.",
"The baseline models are ZeroShotTM (Bianchi et al., 2020b) and CombinedTM (Bianchi et al., 2020a).",
"ZeroShotTM and CombinedTM both take Sentence-BERT (Reimers and Gurevych, 2019) embeddings as inputs but CombinedTM additionally uses BoW.",
"We also implement three NTM-based models, namely BERT-NTM, Word2Vec-NTM, and GloVe-NTM, according to the input embeddings they used.",
"Table 3: Top 4 topics extracted by PT-NTM on the OpenWebText, NYTimes, Grolier and 20Newsgroups datasets.",
"OpenWebText (Pre-training) — Tesla: tesla, autonomous, waymo, driverless, car, musk, vehicle, autopilot, automaker, hyperloop; Drug: marijuana, legalization, cannabi, legalize, norml, drug, dispensary, decriminalization, recreational, prohibition; TPP: tpp, nafta, ustr, trade, freeland, trump, tpa, fta, mexico, climate; GPU: gtx, geforce, nvidia, amd, gpu, radeon, evga, directx, sli, mhz.",
"NYTimes (Fine-tuning) — Racism: racist, racism, trump, black, feminist, political, racial, politic, party, women; Cuisine: shrimp, sauce, cuisine, broth, basil, pork, onion, pastry, garlic, chef; Health: fat, protein, calories, carbohydrate, cup, diet, sugar, chocolate, cholesterol, vitamin; Wedding: wedding, daughter, bride, mother, gown, father, wife, husband, sister, son.",
"Grolier (Fine-tuning) — Myth: thor, norse, mythology, poseidon, chariot, goddess, athena, god, sword, dragon; Artist: art, picasso, artist, museum, sculpture, painting, exhibition, pollock, portrait, monet; History: emperor, empire, justinian, ottoman, byzantine, throne, king, roman, serbian, war; Biology: biology, organism, evolutionary, species, physiology, gene, molecular, fossil, genetic, evolution.",
"20Newsgroups (Fine-tuning) — Politics: clinton, president, bush, tax, senate, political, secretary, government, economy, administration; Terrorist: bomb, fbi, fire, waco, kill, police, soldier, military, weapon, terrorist; Football: player, game, team, nhl, coach, defensive, season, draft, winnipeg, league; Crime: police, cop, officer, woman, gun, car, man, fbi, murder, suspect.",
"BERT-NTM follows the idea of ZeroShotTM, aiming at providing a fair comparison between BERT-based topic models.",
"Word2Vec-NTM only uses pre-trained embeddings in the encoder, which is different from NTM-w2v, as the latter uses the pre-trained Word2Vec embeddings in both the first encoder layer and the last decoder layer.",
"The same setup applies to GloVe-NTM.",
"The experimental results on 20Newsgroups are shown in Table 4 (the other two datasets only provide word counts, making it impossible to extract BERT embeddings since no word context information is present).",
"All the models have similar topic diversity.",
"Our NTM variants outperform both ZeroShotTM and CombinedTM on all three topic coherence measures.",
"The possible reasons could be: 1) Topic modeling does not quite rely on word order information, at least for our experimented dataset; and 2) Training of GloVe utilizes global word-word co-occurrence statistics that are also helpful for topic modeling.",
"As topic modeling can be viewed as a form of word clustering, our results are somewhat in line with previous findings reported in Meng et al. (2019) that using BERT leads to poor performance on text clustering.",
"Number of model layers We vary the number of encoder and decoder layers of the pre-training and fine-tuning models, and show the results in Table 5.",
"It can be observed that the four-layer and three-layer models achieve the highest topic coherence and topic diversity, respectively.",
"Further increasing the layer number resulted in slight declines in all four metrics.",
"MMD loss weight We present the impact of on our model in Figure",
"2. With increasing from 0.03 to 30, the NPMI of PT-NTM-glv first gradually increases, peaking at about 0.14 when = 1 , and then gradually decreases.",
"For Topic Diversity (TD), however, we observe a steady decline for PT-NTM-glv.",
"PT-NTM also has a similar trend but with more drastic changes.",
"Given these findings, it seems that there is a trade-off towards generating more coherent or diverse topics.",
"Nevertheless, it is worth noting that, in comparison to NTM, PT-NTM-glv is very robust to the choice of $\lambda$.",
"The NPMI values of PT-NTM-glv only fluctuate in the range of [0.11, 0.14], while its TD values vary between 0.74 and 0.86.",
"This is in contrast to NTM, which has poor topic coherence for $\lambda \leq 0.1$ and low topic diversity for $\lambda \geq 10$.",
"We attribute the advantage of the pre-trained model to our proposed fine-tuning strategy.",
"During fine-tuning, we mainly update a small set of parameters that are directly related to topics while only slightly tune others, which consequently enables more controllable data/gradient flows and thus produces more stable results.",
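"A minimal PyTorch sketch of such selective updating is below; the model layout, the parameter split, and the learning rates are illustrative assumptions rather than the paper's exact configuration.",
```python
import torch
import torch.nn as nn

# Stand-in model: an encoder plus a topic-word decoder matrix.
model = nn.ModuleDict({
    "encoder": nn.Linear(2000, 50),
    "topic_decoder": nn.Linear(50, 2000),
})

# Parameters directly tied to topics get a normal learning rate;
# everything else is only slightly tuned, keeping the data/gradient
# flow through the pre-trained parts close to fixed.
topic_params = [p for n, p in model.named_parameters() if "topic" in n]
other_params = [p for n, p in model.named_parameters() if "topic" not in n]

optimizer = torch.optim.Adam([
    {"params": topic_params, "lr": 1e-3},   # updated substantially
    {"params": other_params, "lr": 1e-5},   # barely moved
])
```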
"Data efficiency With pre-training, a topic model indeed captures extensive knowledge from an external corpus.",
"As shown in our experiments, the acquired knowledge can improve the performance of subsequent fine-tuning on other datasets.",
"It would be interesting to see to what extent such knowledge can increase data efficiency.",
"To this end, we conducted experiments that take subsets of NYTimes dataset of varying sizes as training datasets.",
"Specifically, we used dataset sizes of 1K, 2K, 4K, ..., 64K, and 100K.",
"For each size, we averaged the results over five runs whose training datasets are randomly sampled from the whole dataset with different random seeds.",
"The results are shown in Figure 3.",
"PT-NTM-glv has a very high starting point when the document number is 1000: the NPMI and TD are about 0.15 and 0.89, respectively.",
"At the same time, NTM has extremely poor performance, with negative NPMI and low TD.",
"Only when the document number increases to 8000 do the topics generated by NTM achieve topic diversity comparable to the topics from PT-NTM-glv.",
"But even when the whole dataset (100K documents) is used by NTM, its NPMI is still about 0.08 lower than that of the 1000-document PT-NTM-glv, which indeed represents a significant difference in topic quality.",
"In summary, pre-training the topic model greatly reduces the need for training data and helps the model achieve superior performance with only 1% of documents on the NYTimes dataset.",
"In this paper, we proposed a simple yet effective strategy for incorporating external knowledge into neural topic modeling: pre-training topic models on a large corpus before fine-tuning them on specific datasets.",
"Our experiments demonstrate the effectiveness of pre-trained neural topic models over alternatives such as incorporating PWEs and PLMs, in terms of topic coherence, topic diversity, and data efficiency.",
"Another advantage of this approach is that it introduces little overhead to the training and none to the inference.",
"Limited by computing resources, we did not experiment with pre-training on larger datasets, though we believe there is still room for improvement given more pre-training data.",
"For future research, we encourage further explorations in model architectures, pre-training objectives, and fine-tuning procedures.",
"We would like to thank anonymous reviewers for their valuable comments and helpful suggestions and we thank Tencent for supporting this project.",
"This work was funded by the National Natural Science Foundation of China (61772132, 62176053)."
] | [
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"objective",
"other",
"other"
] |
[
"Many applications of computational social science aim to infer causal conclusions from nonexperimental data.",
"Such observational data often contains confounders , variables that influence both potential causes and potential effects.",
"Unmeasured or latent confounders can bias causal estimates, and this has motivated interest in measuring potential confounders from observed text.",
"For example, an individual's entire history of social media posts or the content of a news article could provide a rich measurement of multiple confounders.",
"Yet, methods and applications for this problem are scattered across different communities and evaluation practices are inconsistent.",
"This review is the first to gather and categorize these examples and provide a guide to data-processing and evaluation decisions.",
"Despite increased attention on adjusting for confounding using text, there are still many open problems, which we highlight in this paper.",
"In contrast to descriptive or predictive tasks, causal inference aims to understand how intervening on one variable affects another variable (Holland, 1986; Pearl, 2000; Morgan and Winship, 2015; Imbens and Rubin, 2015; Hernan and Robins, 2020).",
"Specifically, many applied researchers aim to estimate the size of a specific causal effect, the effect of a single treatment variable on an outcome variable.",
"However, a major challenge in causal inference is addressing confounders , variables that influence both treatment and outcome.",
"For example, consider estimating the size of the causal effect of smoking (treatment) on life expectancy (outcome).",
"Occupation is a potential confounder that may influence both the propensity to smoke and life expectancy.",
"Estimating the effect of treatment on outcome without accounting for this confounding could result in biased estimates.",
"Figure 1 (left) shows a causal diagram for text that encodes causal confounders, the setting that is the focus of this review paper.",
"To eliminate confounding bias, one approach is to perform randomized controlled trials (RCTs) in which researchers randomly assign treatment.",
"Yet, in many research areas such as healthcare, education, or economics, randomly assigning treatment is either infeasible or unethical.",
"For instance, in our running example, one cannot ethically randomly assign participants to smoke since this could expose them to major health risks.",
"In such cases, researchers instead use observational data and adjust for the confounding bias statistically with methods such as matching, propensity score weighting, or regression adjustment ( 5).",
"In causal research about human behavior and society, there are potentially many latent confounding variables that can be measured from unstructured text data.",
"Text data could either",
"(a) serve as a surrogate for potential confounders; or",
"(b) the language of text itself could be a confounder.",
"Our running example is an instance of text as a surrogate: a researcher may not have a record of an individual's occupation but could attempt to measure this variable from the individual's entire history of social media posts (see Fig. 1).",
"An example of text as a direct confounder: the linguistic content of social media posts could influence censorship (treatment) and future posting rates (outcome) (Roberts et al., 2020).",
"A challenging aspect of this research design is the high-dimensional nature of text.",
"Other work has explored general methods for adjusting for high-dimensional confounders (D'Amour et al., 2017; Rassen et al., 2011; Louizos et al., 2017; Li et al., 2016; Athey et al., 2017).",
"However, text data differ from other high-dimensional data-types because intermediate confounding adjustments can be read and evaluated by humans ( 6) and designing meaningful representations of text is still an open research question.",
"Even when applying simple adjustment methods, a practitioner must first transform text into a lower-dimensional representation via, for example, filtered word counts, lexicon indicators, topic models, or embeddings ( 4).",
"An additional challenge is that empirical evaluation in causal inference is still an open research area (Dorie et al., 2019; Gentzel et al., 2019) and text adds to the difficulty of this evaluation ( 7).",
"We narrow the scope of this paper to review methods and applications with text data as a causal confounder .",
"In the broader area of text and causal inference, work has examined text as a mediator (Veitch et al., 2019), text as treatment (Fong and Grimmer, 2016; Egami et al.; Wood-Doughty et al., 2018; Tan et al., 2014), text as outcome (Egami et al.), causal discovery from text (Mani and Cooper, 2000), and predictive (Granger) causality with text (Balashankar et al., 2019; del Prado Martin and Brendel, 2016; Tabari et al., 2018).",
"Outside of this prior work, there has been relatively little interaction between natural language processing (NLP) research and causal inference.",
"NLP has a rich history of applied modeling and diagnostic pipelines that causal inference could draw upon.",
"For instance, there have been four workshops on representation learning at major NLP conferences in the last four years (Blunsom et al., 2016, 2017; Augenstein et al., 2018, 2019).",
"Because applications and methods for text as a confounder have been scattered across many different communities, this review paper aims to gather and unify existing approaches and to concurrently serve three different types of researchers and their respective goals: For applied practitioners, we collect and categorize applications with text as a causal confounder (Tables 1 and 2), and we provide a flow-chart of analysts' decisions for this problem setting (Fig. 2).",
"For causal inference researchers working with text data, we highlight recent work in representation learning in NLP ( 4) and caution that this is still an open research area with questions of the sensitivity of effects to choices in representation.",
"We also outline existing interpretable evaluation methods for adjustments of text as a causal confounder ( 6).",
"For NLP researchers working with causal inference, we summarize some of the most-used causal estimators that condition on confounders: matching, propensity score weighting, regression adjustment, doubly-robust methods, and causally-driven representation learning ( 5).",
"We also discuss evaluation of methods with constructed observational studies and semi-synthetic data ( 7).",
"In Table 1, we gather and summarize applications that use text to adjust for potential confounding.",
"This encompasses both",
"(a) text as a surrogate for confounders, or",
"(b) the language itself as confounders.",
"As an example, consider Kiciman et al. (2018), where the goal is to estimate the size of the causal effect of alcohol use (treatment) on academic success (outcome) for college students.",
"Since randomly assigning college students to binge drink is not feasible or ethical, the study instead uses observational data from Twitter, which also has the advantage of a large sample size of over sixty-three thousand students.",
"We acknowledge that Table 1 is by no means exhaustive.",
"To construct Table 1, we started with three seed papers: Roberts et al. (2020), Veitch et al. (2019), and Wood-Doughty et al. (2018).",
"We then examined papers cited by these papers, papers that cited these papers, and papers published by the papers' authors.",
"We repeated this approach with the additional papers we found that adjusted for confounding with text.",
"We also examined papers matching the query causal or causality in the ACL Anthology.",
"They use heuristics to identify the Twitter accounts of college-age students and extract alcohol mentions and indicators of college success (e.g., study habits, risky behaviors, and emotions) from their Twitter posts.",
"They condition on an individual's previous posts (temporally previous to measurements of treatment and outcome) as confounding variables since they do not have demographic data.",
"They represent text as word counts and use stratified propensity score matching to adjust for the confounding bias.",
"The study finds the effects of alcohol use include decreased mentions of study habits and positive emotions and increased mentions of potentially risky behaviors.",
"Text as a surrogate for confounders.",
"Traditionally, causal research that uses human subjects as the unit of analysis would infer demographics via surveys.",
"However, with the proliferation of the web and social media, social research now includes large-scale observational data that would be challenging to obtain using surveys (Salganik, 2017).",
"This type of data typically lacks demographic information but may contain large amounts of text written by participants from which demographics can be extracted.",
"In this space, some researchers are specific about the confounders they want to extract such as an individual's ideology (Sridhar and Getoor, 2019) or mood (Sridhar et al., 2018).",
"Other researchers condition on all the text they have available and assume that low-dimensional summaries capture all possible confounders.",
"For example, researchers might assume that text encodes all possible confounders between alcohol use and college success (Kiciman et al., 2018) or psychiatric medication and anxiety (Saha et al., 2019).",
"We dissect and comment on this assumption in Section 8.",
"Open problems: NLP systems have been shown to be inaccurate for low-resource languages (Duong et al., 2015), and exhibit racial and gender disparity (Blodgett and O'Connor, 2017; Zhao et al., 2017).",
"Furthermore, the ethics of predicting psychological indicators, such as mental health status, from text are questionable (Chancellor et al., 2019).",
"It is unclear how to mitigate these disparities when trying to condition on demographics from text and how NLP errors will propagate to causal estimates.",
"Language as confounders.",
"There is growing interest in measuring language itself (e.g. the sentiment or topical content of text) as causal confounders.",
"For example, Roberts et al. (2020) examine how the perceived gender of an author affects the number of citations that an article receives.",
"However, an article's topics (the confounders) are likely to influence the perceived gender of its author (reflecting an expectation that women write about certain topics) and the number of citations of that article (hotter topics will receive more citations).",
"Figure 2 presents a guide to design decisions for applied research with causal confounders from text.",
"Other domains that analyze language as a confounder include news (Johansson et al., 2016), social media (De Choudhury et al., 2016; Olteanu et al., 2017), and loan descriptions (Pham and Shen, 2017).",
"See Section 4 for more discussion on the challenges and open problems of inferring these latent aspects of language.",
"Two predominant causal inference frameworks are structural causal models (SCM) (Pearl, 2009b) and potential outcomes (Rubin, 1974, 2005), which are complementary and theoretically connected (Pearl, 2009b; Richardson and Robins, 2013; Morgan and Winship, 2015).",
"While their respective goals substantially overlap, methods from structural causal models tend to emphasize conceptualizing, expressing, and reasoning about the effects of possible causal relationships among variables, while methods from potential outcomes tend to emphasize estimating the size or strength of causal effects.",
"In the ideal causal experiment, for each unit of analysis, i (e.g., a person), one would like to measure the outcome, y_i (e.g., an individual's life expectancy), in both a world in which the unit received treatment, t_i = 1 (e.g., the person smoked), as well as in the counterfactual world in which the same unit did not receive treatment, t_i = 0 (e.g., the same person did not smoke).",
"A fundamental challenge of causal inference is that one cannot simultaneously observe treatment and non-treatment for the same unit.",
"In this work, we only address binary treatments, but multi-value treatments are also possible (e.g., Imbens (2000)).",
"The most common population-level estimand of interest is the average treatment effect (ATE) .",
"In the absence of confounders, this is simply the difference in means between the treatment and control groups, $\tau = E(y_i \mid t_i = 1) - E(y_i \mid t_i = 0)$, and the unadjusted or naive estimator is $\hat{\tau}_{\text{naive}} = \frac{1}{n_1} \sum_{i : t_i = 1} y_i - \frac{1}{n_0} \sum_{j : t_j = 0} y_j$ (Eqn. 1), where $n_1$ is the number of units that have received treatment and $n_0$ is the number of units that have not received treatment.",
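"A minimal numerical illustration of the naive estimator in Eqn. 1, on synthetic toy numbers:",
```python
import numpy as np

# Synthetic toy data: t[i] is treatment status, y[i] the outcome.
t = np.array([1, 1, 1, 0, 0, 0, 0])
y = np.array([72., 68., 70., 80., 78., 82., 79.])

tau_naive = y[t == 1].mean() - y[t == 0].mean()
print(f"naive ATE estimate: {tau_naive:.2f}")  # biased if confounders exist
```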
"However, this equation will be biased if there are confounders, z i , that influence both treatment and outcome.",
"Structural causal models (SCMs) use a graphical formalism that depicts nodes as random variables and directed edges as the direct causal dependence between these variables.",
"The typical estimand of choice for SCMs is the probability distribution of an outcome variable Y given an intervention on a treatment variable T, $P(Y \mid do(T = t))$ (Eqn. 2), in which the do-notation represents intervening to set variable T to the value t, thereby removing all incoming arrows to the variable T.",
"Identification.",
"In most cases, Equation 2 is not equal to the ordinary conditional distribution $P(Y \mid T = t)$, since the latter simply filters to a sub-population while the former changes the underlying data distribution via intervention.",
"Other estimands include the average treatment effect on the treated (ATT) and the average treatment effect on the control (ATC) (Morgan and Winship, 2015).",
"Figure 3 shows a causal diagram of common causal relationships.",
"Thus, for observational studies that lack intervention, one needs an identification strategy in order to represent P ( Y | do ( T = t )) in terms of distributions of observed variables.",
"One such identification strategy (assumed by the applications throughout this review) is the backdoor criterion which applies to a set of variables, S , if they",
"(i) block every backdoor path between treatment and outcome, and",
"(ii) no node in S is a descendant of treatment.",
"Without positive identification, the causal effects cannot be estimated and measuring variables from text is a secondary concern.",
"Drawing the causal graph.",
"Causal graphs help clarify which variables should and should not be conditioned on.",
"The causal graphs in Figure 3 illustrate how the direction of the arrows differentiates confounder, collider, and mediator variables.",
"Identifying the differences in these variables is crucial since, by d-separation , conditioning on a confounder will block the treatment-confounder-outcome path, removing bias.",
"By contrast, conditioning on a collider can create dependence along the treatment-collider-outcome path (Pearl, 2009a), potentially introducing more bias (Montgomery et al., 2018; Elwert and Winship, 2014).",
"Mediator variables require a different set of adjustments than confounders to find the natural direct effect between treatment and outcome (VanderWeele, 2015; Pearl, 2014).",
"A practitioner typically draws a causal graph by explicitly encoding theoretical and domain assumptions as well as the results of prior data analyses.",
"In Pearl et al. (2016)'s example of a collider, suppose scholarships at a college are only given to two types of students: those with unusual musical talent and those with high grade point averages.",
"In the general population, musical and academic talent are independent.",
"However, if one discovers a person is on a scholarship (conditioning on the collider) then knowing a person lacks musical talent tells us that they are extremely likely to have a high GPA.",
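"This collider effect is easy to reproduce in a short simulation; the distributions and thresholds below are illustrative assumptions.",
```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
music = rng.normal(size=n)      # musical talent
gpa = rng.normal(size=n)        # academic talent, independent of music
scholarship = (music > 1.5) | (gpa > 1.5)   # the collider

# Unconditionally, music and GPA are (near) uncorrelated ...
print(np.corrcoef(music, gpa)[0, 1])
# ... but conditioning on the collider induces a negative correlation.
print(np.corrcoef(music[scholarship], gpa[scholarship])[0, 1])
```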
"Open Problems: When could text potentially encode confounders and colliders simultaneously?",
"If so, is it possible to use text to adjust exclusively for confounders?",
"After drawing the causal graph, the next step is to use available text data to recover latent confounders.",
"Some approaches pre-specify the confounders of interest and measure them from text, P ( z | x ) .",
"Others learn confounders inductively and use a low-dimensional representation of text as the confounding variable z in subsequent causal adjustments.",
"Pre-specified confounders.",
"When a practitioner can specify confounders they want to measure from text (e.g., extracting occupation from text in our smoking example), they can use either (1) lexicons or (2) trained supervised classifiers as the instrument of measurement.",
"Lexicons are word lists that can either be hand-crafted by researchers or taken off-the-shelf.",
"For example, Saha et al. (2019) use categories of the Linguistic Inquiry and Word Count (LIWC) lexicon (Pennebaker et al., 2001) such as tentativeness, inhibition, and negative affect, and use indicators of these categories in the text as confounders.",
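"A minimal sketch of such lexicon-based measurement is below; the word lists are tiny illustrative stand-ins, not actual LIWC categories (which are licensed and far larger).",
```python
# Toy lexicon: category -> word set (purely illustrative stand-ins).
LEXICON = {
    "tentative": {"maybe", "perhaps", "guess"},
    "negative_affect": {"sad", "hate", "hurt"},
}

def lexicon_indicators(text):
    """Binary indicator per category: does any lexicon word appear?"""
    tokens = set(text.lower().split())
    return {cat: int(bool(tokens & words)) for cat, words in LEXICON.items()}

print(lexicon_indicators("I guess I just feel sad today"))
# {'tentative': 1, 'negative_affect': 1}
```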
"Trained supervised classifiers use annotated training examples to predict confounders.",
"For instance, Saha et al. (2019) also build machine learning classifiers for users' mental states (e.g., depression and anxiety) and apply these classifiers on Twitter posts that are temporally prior to treatment.",
"If these classifiers accurately recover mental states and there are no additional latent confounders, then conditioning on the measured mental states renders treatment independent of potential outcomes.",
"Open problems: Since NLP methods are still far from perfectly accurate, how can one mitigate error that arises from approximating confounding variables?",
"Closely related to this question is effect restoration which addresses error from using proxy variables (e.g., a father's occupation) in place of true confounders (e.g, socioeconomic status) (Kuroki and Pearl, 2014; Oktay et al., 2019).",
"See Morgan and Winship (2015), pgs. 33-34, on both the necessity and difficulty of specifying a causal graph for applied social research.",
"Time-ordering can be particularly helpful when encoding causal relationships (for instance, there cannot be an arrow pointing from variable A to variable B if B preceded A in time).",
"Wood-Doughty et al. (2018) build upon effect restoration for causal inference with text classifiers, but there are still open problems in accounting for error arising from other text representations and in issues of calibration (Nguyen and O'Connor, 2015) and prevalence estimation (Card and Smith, 2018; Keith and O'Connor, 2018) in conjunction with NLP.",
"Ideas from the large literature on measurement error models may also be helpful (Fuller, 1987; Carroll et al., 2006; Buonaccorsi, 2010).",
"Inductively derived confounders.",
"Other researchers inductively learn confounders in order to condition on all aspects of text, known and unknown.",
"For example, some applications condition on the entirety of news (Johansson et al., 2016) or scientific articles (Veitch et al., 2019; Roberts et al., 2020).",
"This approach typically summarizes textual information with text representations common in NLP.",
"Ideally, this would encode all aspects of language (meaning, topic, style, affect, etc.), though this is an extremely difficult, open NLP problem.",
"Typical approaches include the following.",
"(1) Bag-of-words representations discard word order and use word counts as representations.",
"(2) Topic models are generative probabilistic models that learn latent topics in document collections and represent documents as distributions over topics (Blei et al., 2003; Boyd-Graber et al., 2014; Roberts et al., 2014).",
"(3) Embeddings are continuous, vector-based representations of text.",
"To create vector representations of longer texts, off-the-shelf word embeddings such as word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) are combined via variants of weighted averaging (Arora et al., 2017) or neural models (Iyyer et al., 2015; Bojanowski et al., 2017; Yang et al., 2016); a small sketch of such averaging follows this list.",
"(4) Recently, fine-tuned, large-scale neural language models such as BERT (Devlin et al., 2019) have achieved state-of-the-art performance on semantic benchmarks, and are now used as text representations.",
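"As one concrete instance of representation (3) above, a document vector can be built by simply averaging word vectors; the 3-dimensional embeddings below are hypothetical stand-ins for real pre-trained vectors.",
```python
import numpy as np

# Hypothetical pre-trained embeddings (real ones: word2vec, GloVe, ...).
emb = {"smoke": np.array([0.1, 0.9, 0.0]),
       "daily": np.array([0.4, 0.2, 0.3])}

def embed_document(tokens):
    """Uniformly average word vectors to get a document vector."""
    vecs = [emb[w] for w in tokens if w in emb]
    return np.mean(vecs, axis=0)

z = embed_document(["smoke", "daily"])   # used as the confounder vector z
print(z)
```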
"Each of these text representations is a real-valued vector that is used in place of the confounder, z, in a causal adjustment method ( 5).",
"Open problems: Estimates of causal effects are contingent on the garden of forking paths of data analysis, meaning any paths an analyst did not take could have resulted in different conclusions (Gelman and Loken, 2013).",
"For settings with causal confounders from text, the first fork is the choice of representation (e.g., topic models or embeddings) and the second fork is the pre-processing and hyperparameter decisions for the chosen representations.",
"We highlight that these decisions have been shown to alter results in predictive tasks.",
"For instance, studies have shown that pre-processing decisions dramatically change topic models (Denny and Spirling, 2018; Schofield et al., 2017); embeddings are sensitive to hyperparameter tuning (Levy et al., 2015) and the construction of the training corpus (Antoniak and Mimno, 2018); and fine-tuned language model performance is sensitive to random restarts (Phang et al., 2018).",
"Thus, reporting sensitivity analysis of the causal effects from these decisions seems crucial: how robust are the results to variations in modeling specifications?",
"Given a set of variables Z that satisfy the backdoor criterion ( 3.2), one can use the backdoor adjustment to estimate the causal quantity of interest: $P(Y \mid do(T = t)) = \sum_{z} P(Y \mid T = t, Z = z)\, P(Z = z)$ (Eqn. 3).",
"Conditioning on all confounders is often impractical in high-dimensional settings such as those found in natural language.",
"We provide an overview of methods used by applications in this review that approximate such conditioning, leading to unbiased estimates of treatment effect; however, we acknowledge this is not an exhaustive list of methods and direct readers to more extensive guides (Morgan and Winship, 2015; Athey et al., 2017).",
"Open problems: Causal studies typically make an assumption of overlap , also known as common support or positivity , meaning that any individual has a non-zero probability of assignment to each treatment condition for all possible values of the covariates: z, 0 < P ( T = 1 | Z = z ) < 1 .",
"D'Amour et al. (2017) show that as the dimensionality of covariates grows, strict overlap converges to zero.",
"What are the implications of these results for high-dimensional text data?",
"A propensity score estimates the conditional probability of treatment given a set of possible confounders (Rosenbaum and Rubin, 1984, 1983; Caliendo and Kopeinig, 2008).",
"The true model of treatment assignment is typically unknown, so one must estimate the propensity score from data (e.g., with a logistic regression model): $\hat{\pi}_i = P(T = 1 \mid Z = z_i)$ (Eqn. 4).",
"Inverse Probability of Treatment Weighting (IPTW) assigns a weight to each unit based on the propensity score (Lunceford and Davidian, 2004), $w_i = t_i / \hat{\pi}_i + (1 - t_i) / (1 - \hat{\pi}_i)$ (Eqn. 5), thus emphasizing, for example, treated units that were originally unlikely to be treated ($t_i = 1$, low $\hat{\pi}_i$).",
"The ATE is then calculated with weighted averages between the treatment and control groups: $\hat{\tau}_{\text{IPTW}} = \frac{1}{n_1} \sum_{i : t_i = 1} w_i y_i - \frac{1}{n_0} \sum_{j : t_j = 0} w_j y_j$ (Eqn. 6).",
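"A compact sketch of Eqns. 4-6 using scikit-learn on synthetic data is below; a real analysis would also check overlap and the calibration of the propensity model.",
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=(n, 3))                        # observed confounders
pi_true = 1 / (1 + np.exp(-z @ np.array([1., -1., .5])))
t = rng.binomial(1, pi_true)                       # confounded treatment
y = 2.0 * t + z.sum(axis=1) + rng.normal(size=n)   # true ATE = 2.0

pi_hat = LogisticRegression().fit(z, t).predict_proba(z)[:, 1]  # Eqn. 4
w = t / pi_hat + (1 - t) / (1 - pi_hat)                         # Eqn. 5
ate = ((w * y)[t == 1].sum() / (t == 1).sum()
       - (w * y)[t == 0].sum() / (t == 0).sum())                # Eqn. 6
print(f"IPTW ATE estimate: {ate:.2f}")  # close to 2.0
```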
"Matching and stratification.",
"Matching aims to create treatment and control groups with similar confounder assignments; for example, grouping units by observed variables (e.g., age, gender, occupation), then estimating effect size within each stratum (Stuart, 2010).",
"Exact matching on confounders is ideal but nearly impossible to obtain with high-dimensional confounders, including those from text.",
"A framework for matching with text data is described by Mozer et al. (2020) and requires choosing: (1) a text representation ( 4); (2) a distance metric (cosine, Euclidean, absolute difference in propensity score, etc.); and (3) a matching algorithm.",
"As Stuart (2010) describes, the matching algorithm involves additional decisions about",
"(a) greedy vs. optimal matching;",
"(b) number of control items per treatment item;",
"(c) using calipers (thresholds of maximum distance); and",
"(d) matching with or without replacement.",
"Coarsened exact matching (CEM) matches on discretized raw values of the observed confounders (Iacus et al., 2012).",
"Instead of directly matching on observed variables, stratified propensity-score matching partitions propensity scores into intervals (strata) and then all units are compared within a single strata (Caliendo and Kopeinig, 2008).",
"Stratification is also known as interval matching, blocking, and subclassification.",
"Once the matching algorithm is implemented, counterfactuals (estimated potential outcomes) are obtained from the matches $\mathcal{M}_i$ for each unit $i$: $\hat{y}_i(k) = y_i$ if $t_i = k$, and $\hat{y}_i(k) = \frac{1}{|\mathcal{M}_i|} \sum_{j \in \mathcal{M}_i} y_j$ if $t_i \neq k$ (Eqn. 7).",
"Lunceford and Davidian (2004) note there are two versions of IPTW, where both the weighted sum and the raw count have been used for the $n_0$ and $n_1$ denominators.",
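"A small sketch of 1-nearest-neighbor matching on a propensity score, filling in the counterfactuals of Eqn. 7 on synthetic data; real pipelines would add the caliper, replacement, and balance-check decisions discussed above.",
```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
z = rng.normal(size=n)                   # a single confounder
pi = 1 / (1 + np.exp(-z))                # propensity score (true, for simplicity)
t = rng.binomial(1, pi)
y = 1.5 * t + z + rng.normal(size=n)     # true effect = 1.5

treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
effects = []
for i in treated:
    # Match each treated unit to its nearest control on the score (Eqn. 7).
    j = control[np.argmin(np.abs(pi[control] - pi[i]))]
    effects.append(y[i] - y[j])
print(f"matched ATT estimate: {np.mean(effects):.2f}")  # near 1.5
```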
"Open problems: Ho et al. (2007) describe matching as a method to reduce model dependence because, unlike regression, it does not rely on a parametric form.",
"Yet, estimated causal effects may still be sensitive to other matching method decisions such as the number of bins in coarsened exact matching, the number of controls to match with each treatment in the matching algorithm, or the choice of caliper.",
"Are causal estimates made using textual covariates particularly sensitive or robust to such choices?",
"Regression adjustment fits a supervised model from observed data of the expected conditional outcomes, $q(t, z) \equiv E(Y \mid T = t, Z = z)$ (Eqn. 9).",
"Unlike methods that model only treatment (IPTW) or only outcome (regression adjustment), doubly robust methods model both treatment and outcome, and have the desirable property that if either the treatment or outcome models are unbiased then the effect estimate will be unbiased as well.",
"These methods often perform very well in practice (Dorie et al., 2019).",
"Adjusted inverse probability of treatment weighting (A-IPTW) combines estimated propensity scores (Eqn. 4) and conditional outcomes (Eqn. 9), while the more general targeted maximum likelihood estimator (TMLE) updates the conditional outcome estimate with a regression on the propensity weights (Eqn. 5) and q (Van der Laan and Rose, 2011).",
"Several research efforts design representations of text specifically for causal inference goals.",
"For alternative matching estimators, see Abadie et al. (2004).",
"This estimator is technically the sample average treatment effect (SATE), not the population-level ATE, since we have pruned treatment and control pairs that do not have matches (Morgan and Winship, 2015).",
"These approaches still initialize their models with the representations of text described in Section 4, but the representations are then updated with machine learning architectures that incorporate the observed treatment assignment and other causal information.",
"Johansson et al. (2016) design a network with a multitask objective that aims for low prediction error for the conditional outcome estimates, q, and minimizes the discrepancy distance between q(1, z_i) and q(0, z_i) in order to achieve balance in the confounders.",
"Roberts et al. (2020) combine structural topic models (STM; Roberts et al. (2014)), propensity scores, and matching.",
"They use the observed treatment assignment as the content covariate in the STM, append an estimated propensity score to the topic-proportion vector for each document, and then perform coarsened exact matching on that vector.",
"Veitch et al. (2019) fine-tune a pre-trained BERT network with a multi-task loss objective that estimates",
"(a) the original masked language-modeling objective of BERT,",
"(b) propensity scores, and",
"(c) conditional outcomes for both treatment and control.",
"They use the predicted conditional outcomes and propensity scores in regression adjustment and the TMLE formulas.",
"Open problems: These methods have yet to be compared to one another on the same benchmark evaluation datasets.",
"Also, when are the causal effects sensitive to hyperparameter and network architecture choices and what should researchers do in these settings?",
"Text data has the advantage of being interpretable: matched pairs and some low-dimensional representations of text can be read by humans to evaluate their quality.",
"When possible, we suggest practitioners use (1) interpretable balance metrics and/or (2) human judgements of treatment propensity to evaluate intermediate steps of the causal estimation pipeline.",
"For matching and propensity score methods, the confounder balance should be assessed, since ideally P ( Z | T = 1) = P ( Z | T = 0) in a matched sample (Stuart, 2010).",
"A standard numerical balance diagnostic is the standardized difference in means (SDM), $\text{SDM}(j) = \frac{\frac{1}{n_1} \sum_{i : t_i = 1} z_{ij} - \frac{1}{n_0} \sum_{i : t_i = 0} z_{ij}}{\sigma_j^{t=1}}$, where $z_{ij}$ is a single confounder $j$ for a single unit $i$ and $\sigma_j^{t=1}$ is the standard deviation of $z_{ij}$ for all $i$ such that $t_i = 1$.",
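"The SDM diagnostic is a few lines of numpy; z here stands for any matrix whose columns are confounders.",
```python
import numpy as np

def sdm(z, t):
    """Standardized difference in means for each confounder column j."""
    z1, z0 = z[t == 1], z[t == 0]
    return (z1.mean(axis=0) - z0.mean(axis=0)) / z1.std(axis=0)

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 4))
t = rng.binomial(1, 0.5, size=500)
print(np.round(sdm(z, t), 3))  # values near 0 indicate good balance
```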
"SDM can also be used to evaluate the propensity score, in which case there would only be a single j (Rubin, 2001).",
"For causal text applications, Roberts et al. (2020) and Sridhar and Getoor (2019) estimate the difference in means for each topic in a topic-model representation of confounders and Sridhar et al. (2018) estimate the difference in means across structured covariates but not the text itself.",
"As an alternative to SDM, Roberts et al. (2020) use string kernels to perform similarity checks.",
"Others use domain-specific, known structured confounders to evaluate the balance between treatment and control groups.",
"For instance, De Choudhury and Kiciman (2017) sample treatment-control pairs across all propensity score strata and label the sampled text based on known confounders (in their case, from a previously-validated codebook of suicidal ideation risk markers).",
"Open problems: For embeddings and causally-driven representations, each dimension in the confounder vector z is not necessarily meaningful.",
"How can balance metrics be used in this setting?",
"When possible, one can also improve validation by presenting matched items (posts, sentences, documents, etc.) to humans for evaluation.",
"Humans can either",
"(a) use a scale (e.g., a 1-5 Likert scale) to rate items individually on their propensity for treatment, or",
"(b) assess similarity of paired items after matching.",
"A simple first step is for analysts to do in-house evaluation on a small sample (e.g., Roberts et al. (2020)), but larger-sample experiments on crowd-working platforms can also increase the validity of these methods (e.g., Mozer et al. (2020)).",
"Open problems: How can these human judgement experiments be improved and standardized?",
"Future work could draw from a rich history in NLP of evaluating representations of topic models and embeddings (Wallach et al., 2009; Bojanowski et al., 2017; Schnabel et al., 2015) and evaluating semantic similarity (Cer et al., 2017; Bojanowski et al., 2017; Reimers and Gurevych, 2019).",
"Because the true causal effects in real-world causal inference are typically unknown, causal evaluation is a difficult and open research question.",
"As algorithmic complexity grows, the expected performance of causal methods can be difficult to estimate theoretically (Jensen, 2019).",
"Other causal evaluations involve synthetic data .",
"However, as Gentzel et al. (2019) discuss, synthetic data has no unknown unknowns and many researcher degrees of freedom, which limits their effectiveness.",
"Thus, we encourage researchers to evaluate with constructed observational studies or semi-synthetic datasets , although measuring latent confounders from text increases the difficulty of creating realistic datasets that can be used for empirical evaluation of causal methods.",
"Constructed observational studies collect data from both randomized and non-randomized experiments with similar participants and settings.",
"Evaluations of this kind include job training programs in economics (LaLonde, 1986; Glynn and Kashin, 2013), advertisement marketing campaigns (Gordon et al., 2019), and education (Shadish et al., 2008).",
"For instance, Shadish et al. (2008) randomly assign participants to a randomized treatment (math or vocabulary training) and non-randomized treatment (participants choose their own training).",
"They compare causal effect estimates from the randomized study with observational estimates that condition on confounders from participant surveys (e.g., sex, age, marital status, like of mathematics, extroversion, etc.).",
"Open problems: To extend constructed observational studies to text data, one could build upon Shadish et al. (2008) and additionally",
"(a) ask participants to write free-form essays of their past educational and childhood experiences and/or",
"(b) obtain participants' public social media posts.",
"Then causal estimates that condition on these textual representation of confounders could be compared to both those with surveys and the randomized settings.",
"Alternatively, one could find observational studies with both real covariates and text and (1) randomize treatment conditional on the propensity score model (constructed from the covariates but not the text) and (2) estimate causal effect given only text (not the covariates).",
"Then any estimated non-zero treatment effect is only bias.",
"Semi-synthetic datasets use real covariates and synthetically generate treatment and outcome, as in the 2016 Atlantic Causal Inference Competition (Dorie et al., 2019).",
"Several applications in this review use real metadata or latent aspects of text to simulate treatment and outcome: Johansson et al. (2016) simulate treatment and outcome from two centroids in topic model space from newswire text; Veitch et al. (2019) use indicators of an article's buzzy keywords; Roberts et al. (2020) use quantitative methodology categories of articles that were hand-coded by other researchers.",
"Open problems: Semi-synthetic datasets that use real covariates of text seem to be a better evaluation strategy than purely synthetic datasets.",
"However, with semi-synthetic datasets, researchers could be inadvertently biased to choose metadata that they know their method will recover.",
"A promising future direction is a competition-style evaluation like Dorie et al. (2019) in which one group of researchers generates a causal dataset with text as a confounder and other groups of researchers evaluate their causal methods without access to the data-generating process.",
"Computational social science is an exciting, rapidly expanding discipline.",
"With greater availability of text data, alongside improved natural language processing models, there is enormous opportunity to conduct new and more accurate causal observational studies by controlling for latent confounders in text.",
"While text data ought to be as useful for measurement and inference as traditional low-dimensional social-scientific variables, combining NLP with causal inference methods requires tackling major open research questions.",
"Unlike predictive applications, causal applications have no ground truth, so it is difficult to distinguish modeling errors and forking paths from the true causal effects.",
"In particular, we caution against using all available text in causal adjustment methods without any human validation or supervision, since one cannot diagnose any potential errors.",
"Solving these open problems, along with the others presented in this paper, would be a major advance for NLP as a social science methodology.",
"The authors thank Sam Witty, Jacob Eisenstein, Brandon Stewart, Zach Wood-Doughty, Andrew Halterman, Laura Balzer, and members of the University of Massachusetts Amherst NLP reading group for helpful feedback, as well as the anonymous referees for detailed peer reviews."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other"
] |
[
"Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.",
"Evidence has shown that they are overparameterized; attention heads can be pruned without significant performance loss.",
"In this work, we instead reallocate them: the model learns to activate different heads on different inputs.",
"Drawing connections between multi-head attention and mixture of experts, we propose the mixture of attentive experts model (MAE).",
"MAE is trained using a block coordinate descent algorithm that alternates between updating (1) the responsibilities of the experts and (2) their parameters.",
"Experiments on machine translation and language modeling show that MAE outperforms strong baselines on both tasks.",
"Particularly, on the WMT14 English to German translation dataset, MAE improves over transformer-base by 0.8 BLEU, with a comparable number of parameters.",
"Our analysis shows that our model learns to specialize different experts to different inputs.",
"The transformer architecture and its variants achieve state-of-the-art performance across a variety of NLP tasks, including machine translation (Vaswani et al., 2017; Ott et al., 2018), language modeling (Radford et al., 2018; Baevski and Auli, 2019), semantic role labeling (Strubell et al., 2018), and more (Devlin et al., 2019; Liu et al., 2019b; Yang et al., 2019b).",
"Under the hood, multihead attention provides the driving force: multiple separately parameterized attention functions act in parallel to contextualize the input representations; their outputs are then gathered by an affine transformation, and fed to onward computation.",
"Recent efforts by Voita et al. (2019) and Michel et al. (2019) suggest that typical transformer networks are overparameterized, in the sense that at test time, many of the heads, or even a full layer (Fan et al., 2020), can be removed without significant loss in performance.",
"2 In response to this observation, they propose to prune the unimportant attention heads in the model after it is trained, aiming for faster inference.",
"In this paper, we ask whether, instead of reducing the model capacity, we can use it more effectively.",
"We propose mixture of attentive experts (MAE).",
"MAE retains all attention heads, and learns to activate different heads on different inputs (see illustration in Figure 1).",
"We start by showing that multi-head attention can be seen as a uniform, input-agnostic mixture of experts (Jacobs et al., 1991), by grouping a subset of attention heads as an expert ( 2.2).",
"We do not argue that overparameterization is bad for training.",
"In fact, it may be necessary for successful optimization and good generalization (Neyshabur et al., 2014; Zhang et al., 2016; Soudry and Carmon, 2016, inter alia ).",
"Rather, we try to explore more efficient ways to use the modeling capacity, than, e.g., removing part of the model.",
"We then introduce MAE, which instead of uniformly weighting the experts, complements the experts with a learned, input-dependent function that assigns their responsibilities ( 2.3).",
"To train MAE, we propose a two-step algorithm based on block coordinate descent ( 3), which alternates between updating the experts' responsibilities and their parameters.",
"We evaluate MAE on machine translation and language modeling ( 4).",
"Our approach outperforms strong baselines on both; on the WMT14 English to German MT dataset, MAE outperforms transformer-base (Vaswani et al., 2017) by 0.8 BLEU with a negligible increase in the number of parameters.",
"Our analysis shows that MAE learns to encourage different experts to specialize on different inputs ( 5).",
"This section describes MAE in detail.",
"It is inspired by a mixture-of-experts view of multi-head attention, which we present in 2.2.",
"Specifically, we show that multi-head attention can be viewed as a mixture of uniformly weighted experts, each consisting of a subset of attention heads.",
"Based on this observation, we propose MAE, which learns to weight the experts ( 2.3) depending on the input.",
"We begin by laying out notation and necessary background in 2.1.",
"Mixture of experts is a well-established technique for ensemble learning (Jacobs et al., 1991).",
"It jointly trains a set of expert models $\{f_i\}_{i=1}^{k}$ that are intended to specialize across different input cases.",
"The outputs produced by the experts are aggregated by a linear combination, with a gating function $g = [g_1, \ldots, g_k]$ determining the importance of each expert in the final decision: $\text{MoE}(x) = \sum_{i=1}^{k} g_i(x) f_i(x)$ (Eqn. 1).",
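"A minimal PyTorch rendering of this combination is below; the linear experts and gate are illustrative choices, not a specific published architecture.",
```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, d, k):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d, d) for _ in range(k))
        self.gate = nn.Linear(d, k)

    def forward(self, x):                        # x: (batch, d)
        g = torch.softmax(self.gate(x), dim=-1)  # responsibilities g_i(x)
        outs = torch.stack([f(x) for f in self.experts], dim=-1)
        return (outs * g.unsqueeze(1)).sum(-1)   # sum_i g_i(x) f_i(x)

moe = MixtureOfExperts(d=16, k=4)
print(moe(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```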
"Multi-head attention is the key building block for the state-of-the-art transformer architectures (Vaswani et al., 2017).",
"At its core are multiple separately parameterized attention heads.",
"An attention head takes as input a n -byd matrix X , with each row being the vector representation of an input element.",
"It contextualizes the input using a dot-product attention mechanism: $\tilde{H}_i = \text{softmax}\left(X Q_i K_i^\top X^\top\right) X V_i$ (Eqn. 2), where $Q_i$, $K_i$, and $V_i$ are learned matrices, and the softmax normalizes row-wise.",
"The outputs of the attention heads are then concatenated and fed through a learned affine transformation: $Z \triangleq \text{MultiHead}(X) = [\tilde{H}_1; \ldots; \tilde{H}_h] W$ (Eqn. 3), where $W$ is a learned matrix and $h$ denotes the number of attention heads.",
"We now present a different but equivalent computation of Eq. 3, aiming for a smoother transition into the following sections.",
"Let $H_i = \tilde{H}_i W_i$, where $W_i$ is a block submatrix of $W$, i.e., $W = [W_1^\top; W_2^\top; \ldots; W_h^\top]^\top$.",
"Then $Z = [\tilde{H}_1; \ldots; \tilde{H}_h] W = \sum_{i=1}^{h} H_i$ (Eqn. 4).",
"Eq. 4 provides a different view of the output computation of multi-head attention: each attention head first projects the contextualized representation with a learned matrix (i.e., $H_i = \tilde{H}_i W_i$), then the outputs of all heads are gathered with a sum.",
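"The equivalence between Eq. 3 and Eq. 4 can be checked numerically; the random matrices below stand in for the heads' outputs and the blocks of W.",
```python
import numpy as np

rng = np.random.default_rng(0)
h, n, d_head, d_model = 4, 5, 8, 32
H_tilde = [rng.normal(size=(n, d_head)) for _ in range(h)]     # head outputs
W_blocks = [rng.normal(size=(d_head, d_model)) for _ in range(h)]

# Concatenate-then-project, as in Eq. 3 ...
concat = np.concatenate(H_tilde, axis=1) @ np.concatenate(W_blocks, axis=0)
# ... equals the sum of per-head projections H_i = H_tilde_i @ W_i (Eq. 4).
summed = sum(Ht @ Wi for Ht, Wi in zip(H_tilde, W_blocks))
print(np.allclose(concat, summed))  # True
```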
"We now show that this can be seen as a uniformly weighted mixture of experts.",
"More specifically, $\text{MultiHead}(X) = \sum_{i=1}^{h} \frac{1}{h} f_i(X; \theta_i)$ (Eqn. 5), where $\theta_i$ denotes the $i$th expert's parameters.",
"Some authors explicitly distinguish queries, keys, and values (Vaswani et al., 2017).",
"These inputs can sometimes differ, e.g., in encoder-decoder attention.",
"We suppress such differences for clarity.",
"$f_i(\cdot\,; \theta_i)$ is a parameterized function of the input, which calculates a sum of the outputs of all but the $i$th attention head.",
"This is achieved by subtracting $H_i$ from $\sum_{j=1}^{h} H_j$, then scaling up the result by $h/(h-1)$.",
"The experts share part of their parameters: any two share $h - 2$ attention heads.",
"A uniform responsibility of $1/h$ is used.",
"Discussion.",
"Viewing multi-head attention through this MoE lens suggests some interesting consequences.",
"One can replace the input-agnostic responsibility in Eq. 5 with a function of the input.",
"Indeed, we have good reasons for doing so.",
"Voita et al. (2019) and Michel et al. (2019) show that for transformer networks, a handful of important attention heads are sufficient to achieve good test-time performance.",
"They propose to prune the rest using an input-agnostic procedure.",
"Instead of doing so, here we see a potential alternative: keep all the heads, but only activate those that are important to the input.",
"This motivates MAE, which we now introduce.",
"MAE is inspired by the connections between MoE and multi-head attention we draw in 2.2.",
"On top of multi-head attention, MAE learns an input-dependent parameterized gating function $g(\cdot\,; \phi)$ to complement the experts.",
"More formally, the uniform responsibility $1/h$ in Eq. 5 is replaced by $g(\cdot\,; \phi)$: given input $X$, MAE outputs $\sum_{i=1}^{h} g_i(X; \phi)\, f_i(X; \theta_i)$ (Eqn. 6).",
"Experts $f_i$ are the same as those in Eq. 5.",
"$g(\cdot\,; \phi)$ is parameterized with a multi-layer perceptron (MLP) followed by a softmax.",
"It first averages $X$ along the rows (i.e., the sequence direction), and then feeds the result through a two-layer tanh-MLP.",
"$g(\cdot\,; \phi)$ outputs a normalized $h$-dimensional vector using a softmax, indicating the responsibilities of the experts.",
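"A sketch of this gating network in PyTorch; the hidden size is an illustrative choice.",
```python
import torch
import torch.nn as nn

class Gate(nn.Module):
    def __init__(self, d_model, n_heads, d_hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.Tanh(),   # two-layer tanh-MLP
            nn.Linear(d_hidden, n_heads),
        )

    def forward(self, x):               # x: (batch, seq_len, d_model)
        pooled = x.mean(dim=1)          # average along the sequence
        return torch.softmax(self.mlp(pooled), dim=-1)  # expert weights

g = Gate(d_model=512, n_heads=8)
print(g(torch.randn(2, 10, 512)).sum(-1))  # each row sums to 1
```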
"It can be seen as a learned probability distribution over the experts.",
"MAE can learn to assign more responsibility to the experts that are more important to the given input, allowing them to contribute more.",
"MAE is applicable wherever multi-head attention is used.",
"For example, in a machine translation experiment ( 4.2), we replace with MAE all the multi-head attention in a transformer network, including the self-attention in all encoder and decoder layers, as well as those attending over the encoded source from the decoder.",
"Each of them is separately treated as a mixture of experts, and has its own gating function.",
"The additional parameter overhead is small: gating functions account for only 3-5% of the parameters of the full model (Appendix A).",
"It is straightforward to jointly train the experts and the gating functions in an MAE model using backpropagation.",
"However, in line with previous observations (Shen et al., 2019), we empirically observe that this is prone to degenerate solutions where the gating functions tend to learn to similarly weight the experts (see 5.1).",
"As a remedy, we propose a block coordinate descent (BCD) training procedure.",
"At a high level, training is decomposed into two interleaving steps: a G step updates the gating function $g(\cdot\,; \phi)$, fixing the experts; an F step fixes the gating function and updates one randomly selected expert $f_i(\cdot\,; \theta_i)$.",
"The computations for G and F steps differ: in a G step, MAE outputs a linear combination of the experts' outputs, and only updates the gating function's parameters (Algorithm 1).",
"No expert is updated.",
"An F step computes the experts' responsibilities g ( X ) , according to which an expert i is then sampled (Algorithm 2).",
"MAE computes the output with $f_i$, which is then updated, without updating the gating function or other experts.",
"6 A non-differentiable sampling from g is involved in F steps.",
"Besides the undesired degeneracy, we also find that the model suffers worse overfitting when $\theta$ and $\phi$ are jointly updated (Appendix B).",
"One possible reason is that, compared to the standard multi-head attention, the learned gates give the model additional capacity to compensate for the experts' errors with others' outputs at training time, hurting generalization (Jacobs et al., 1991).",
"Another common degeneracy of MoEs is the rich get richer where one of the experts is always picked and others ignored.",
"As observed by Voita et al. (2019), this can happen when the experts are trained to be sparsely weighted.",
"When tuning the hyperparameters, we observe the rich get richer degeneracy if the learning rate is set too large.",
"For clarity, our discussion focuses on $\theta$ and $\phi$.",
"The rest of the model, e.g., the word embeddings in a transformer network, is updated along with $\theta$.",
"Training aims to minimize the loss $\mathcal{L}$ over $\{\theta, \phi\}$.",
"In mini-batch training, which we use in the experiments, different experts can be sampled for different instances in a mini-batch.",
"This is because g depends on the inputs.",
"This means that multiple experts will be updated in an F step, but each due to a subset of the examples in the mini-batch.",
"This poses no difficulty for backpropagation, since an F step never calculates the gradients w.r.t. $\phi$.",
"At test time, the computation is the same as that in a G step, i.e., MAE outputs a linear combination of the experts, weighted by g .",
"Training time overhead.",
"A straightforward training procedure is to, for each training instance, first take a G step, and then an F step.",
"This doubles the forward propagation computation overhead.",
"In practice, it is not necessary to take G steps as frequently as F steps, since they only update a small portion of the model.",
"In the experiments, we take G steps one fifth as frequently as F steps: we make G updates every 5 epochs while always taking F steps.",
"In preliminary experiments, we find this reduces training time overhead without significant impact on the performance.",
"In this way, training time for MAE is roughly 1.2 times longer than that of the transformer network it builds on.",
"Algorithm 3 summarizes the block coordinate descent training in a given epoch.",
"Connections to dropout.",
"In the above block coordinate descent training algorithm, an F step samples an expert to update, and ignores the rest in both forward and backward computation.",
"It is reminiscent of dropout (Srivastava et al., 2014).",
"Specifically, selecting expert f_i is equivalent to dropping head i (recall from Eq. 5 that f_i includes all but head i).",
"7 In this way, training time for MAE is roughly 1.2 times longer than that of the transformer network it builds on.",
"Algorithm 3: Block coordinate descent (BCD) training for MAE at epoch e; D denotes the training data (for each instance X_i in D, take a G step every 5 epochs, and always take an F step).",
"In other words, the F steps (Algorithm 2) can be seen as a structured dropout applied to the attention heads, but with learned, input-dependent drop probabilities.",
"So far, we view MAE as a mixture of h experts, each consisting of h−1 attention heads.",
"One can, of course, generalize this to other settings, e.g., mixing (h choose h−2) experts, each containing h−2 heads.",
"From the dropout view, this translates to dropping more attention heads: dropping t heads out of h is equivalent to applying a dropout with drop probability t/h, in the sense that their expected numbers of dropped units are the same.",
"Despite the similarity between MAE and dropout, a key difference exists between the two: with dropout, the constant drop probability is set a priori, while MAE uses a gating function g(·; φ) to calculate a learned, input-dependent drop probability.",
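A small NumPy check of this equivalence, under the simplifying assumption (ours, for illustration) that an expert's output is the average of its kept heads' projected outputs:

```python
import numpy as np

h, d = 8, 64
heads = np.random.randn(h, d)          # per-head outputs at one position

def expert_without(i):
    # Expert f_i: average of all heads except head i.
    keep = [j for j in range(h) if j != i]
    return heads[keep].mean(axis=0)

def head_dropout(i):
    # Drop head i and rescale by h/(h-1), dropout-style.
    mask = np.ones(h)
    mask[i] = 0.0
    return (heads * mask[:, None]).mean(axis=0) * h / (h - 1)

assert np.allclose(expert_without(3), head_dropout(3))
```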
"We empirically evaluate MAE on machine translation (§4.2) and language modeling (§4.3) benchmarks.",
"We first introduce the compared models (§4.1).",
"MAE is evaluated under two settings: MAE-7, which builds on BASE and mixes 8 experts, each with 7 attention heads; and MAE-6, which is similar to MAE-7, but mixes (8 choose 2) = 28 experts, each with 6 attention heads.",
"We compare MAE to the following baselines.",
"BASE is a sequence-to-sequence model based on the transformer architecture.",
"NOBCD is the same model as MAE, but does not use block coordinate descent training.",
"Instead, it jointly updates all experts and the gating function at training time, as discussed at the start of §3.",
"UNI-MAE -7 is similar to MAE but does not have parameterized gating functions.",
"It builds on BASE , and mixes 8 experts, each with 7 attention heads.",
"Constant uniform responsibilities are assigned to the experts.",
"At each training step, it updates one uniformly sampled expert; at test time, the outputs of all experts are averaged according to Eq. 5.",
"UNI-MAE-6 mixes 28 6-attention-head experts, and is otherwise the same as UNI-MAE-7.",
"Datasets.",
"We experiment with two machine translation datasets: WMT14 EN-DE (Bojar et al., 2014).",
"Following previous practice (Vaswani et al., 2017), we train on WMT14, and designate newstest2013 and newstest2014 as development and test data, respectively.",
"Our preprocessing follows that of Vaswani et al. (2017) and Ott et al. (2018).",
"A shared source-target vocabulary is used, with 32k byte pair encoding types (BPE; Sennrich et al., 2016).",
"IWSLT14 DE-EN (Cettolo et al., 2014).",
"It is based on TED talks, and is much smaller compared to WMT14.",
"We use the preprocessing from Edunov et al. (2018).",
"Following previous practice, we use separate vocabularies for the source and target, with around 9K and 7K BPE types respectively.",
"10 Preliminary results show that mixing experts with fewer heads leads to underwhelming performance.",
"We conjecture this is due to too strong a regularization effect (§3).",
"11 https://drive.google.com/a/haopeng.name/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8",
"12 http://workshop2014.iwslt.org/",
"Evaluation.",
"The models are evaluated using BLEU (Papineni et al., 2002).",
"A beam search with beam size 5 is used.",
"In the WMT14 experiments, we follow Vaswani et al. (2017), and apply a compound split postprocessing.",
"Results.",
"Table 2 summarizes WMT14 EN-DE translation test performance.",
"The base and large sized transformer models are due to Vaswani et al. (2017).",
"To control for compounding factors, we additionally compare to our implementation of the base sized model (BASE ).",
"It achieves slightly better performance than Vaswani et al. (2017), with a 0.3 BLEU edge.",
"MAE -7 improves over the base transformer by 0.8 BLEU, obtaining similar performance to the large-size transformer of Vaswani et al. (2017) using less than a third as many parameters.",
"Since we do not see similar improvement by UNI-MAE -7, we attribute this gain to input-dependent expert weighting.",
"Having a smaller number of heads for each expert, MAE -6 slightly underperforms MAE -7, and so does UNI-MAE -6 in comparison to UNI-MAE -7.",
"Finally, NOBCD gets worse performance than the transformer baseline, demonstrating the importance of the block coordinate descent training.",
"We observe similar trends on the IWSLT14 DE-EN dataset, summarized in Table 3.",
"The BASE model here is similar to the base-sized transformer in the WMT14 experiment, but with a smaller hidden dimension.",
"MAE -7 outperforms BASE by 0.9 BLEU.",
"Interestingly, UNI-MAE -7 improves over BASE by 0.3 BLEU, possibly because the regularization effect of random expert selection training helps more on this smaller dataset.",
"4.3 Token-level Language Modeling",
"Dataset.",
"We experiment with the WikiText-103 dataset (Merity et al., 2016).",
"It contains articles from English Wikipedia, with a 268K-sized vocabulary.",
"13 https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/get_ende_bleu.sh",
"14 Selecting an expert can be seen as dropping one attention head in training (§3).",
"The training/development/test data respectively have 103M/218K/246K tokens.",
"Setting.",
"Here the BASE model is the strong language model by Baevski and Auli (2019).",
"It is based on a 16-layer transformer network; each multi-head attention layer has 8 heads.",
"It uses different embedding dimensions for the tokens, based on their frequencies.",
"We closely follow Baevski and Auli (2019) in terms of hyperparameters and training procedures.",
"The readers are referred to their paper and Appendix A for further architecture and hyperparameter details.",
"Notes on context size.",
"Baevski and Auli (2019) study the effect of context window, i.e., the number of history tokens the model attends over.",
"They find that using larger context sizes leads to better performance (Baevski and Auli, 2019, Table 5).",
"Their best setting uses a 3,072-token training context size, and 2,048 at test time (i.e., the model has access to 2,048 tokens before predicting any token at test time).",
"However, we are not able to train MAE, nor replicate their results, under this setting: our GPUs have far less memory, and it is impossible to even load a 3,072-token context chunk.",
"Therefore we train and evaluate MAE and UNI-MAE-7 with smaller 512/480 context sizes, also explored by Baevski and Auli (2019), which allows for a head-to-head comparison.",
"Results.",
"Table 4 shows the perplexity on WikiText-103 test data.",
"When trained under the same setting, MAE outperforms Baevski and Auli (2019) by more than 0.3 perplexity.",
"Interestingly, despite the much smaller context at both training and test time, MAE matches the best setting by Baevski and Auli (2019).",
"UNI-MAE-7 and NOBCD underperform the baseline (higher perplexity).",
"This section first empirically confirms that MAE learns to activate different experts on different inputs (§5.1).",
"We then run a synthetic experiment to explore MAE's potential in transfer learning (§5.2).",
"One of the appealing properties of MoE models is that they could learn to activate different experts, depending on what expertise is needed for the input.",
"15 Baevski and Auli (2019) use NVIDIA Tesla V100 GPUs with 32GB memory, while we only have access to GeForce RTX 2080 Ti GPUs, with 11GB memory.",
"Does MAE learn to do so?",
"We empirically study this question, and present evidence indicating that it does, at least in part.",
"We consider the encoders of the UNI-MAE-7, NOBCD, and MAE-7 models trained on WMT14.",
"We first study whether BCD training helps drifting MAE away from uniformly weighting the experts, agnostic to the inputs.",
"We treat the gating values as probabilities, and calculate their entropies: H(g) = −Σ_{i=1}^{h} g_i log g_i, which are then averaged across different layers.",
"The average entropy on the development set for MAE -7 is 1.91, lower than the 2.02 by the NOBCD model trained without BCD.",
"In comparison, UNI-MAE -7 uniformly weights the experts and has the entropy of 2.08.",
"This indicates that gating weights of MAE trained with BCD are more focused on one or a subset of experts than trained without.",
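The entropy computation is straightforward; here is a sketch, with the uniform case reproducing the 2.08 (= log 8) reported for UNI-MAE-7:

```python
import torch

def gate_entropy(g):
    # g: [num_instances, h] gating weights (each row sums to 1).
    # H(g) = -sum_i g_i log g_i, averaged over instances; the small epsilon
    # guards against log(0).
    return -(g * (g + 1e-12).log()).sum(dim=-1).mean()

uniform = torch.full((1, 8), 1.0 / 8)
print(gate_entropy(uniform))  # log(8) ~ 2.079, matching UNI-MAE-7's 2.08
```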
"Second, we study whether MAE learns to specialize different experts for different inputs.",
"To do so we attribute the development instances to the experts that maximize the gating weights.",
"For the first encoder layer of MAE -7, the percentages of instances attributed to each of the 8 experts are relatively balanced: 13%, 14%, 9%, 16%, 10%, 15%, 10%, 12%.",
"This suggests that all experts are assigned a substantial part of the input, and it is not the case that BCD leads to a rich get richer outcome.",
"We then continue and explore whether MAE performs reasonably well when using only the most specialized experts.",
"For each development instance, we select the experts maximizing the gating weights and ignore the rest, instead of linearly combining them as in Eq. 6.",
"16 The same experiments can be done with the decoders, where the inputs to the gating functions are German sentences; the authors lack German expertise, and interpreting the resulting analysis would not have been possible for us.",
"We see from Table 5 a 0.3 BLEU decrease under this setting.",
"In comparison, NOBCD has a larger performance decrease of 0.7 BLEU.",
"NOBCD's performance drop is similar to that of UNI-MAE-7, for which we randomly select an expert at each layer and average the performance over 5 runs.",
"These results support the proposition that MAE specializes better when trained with BCD.",
"Finally, we search for the tokens that are more likely to activate each expert.",
"We compute the pointwise mutual information (PMI; Church and Hanks, 1990) between tokens and experts: PMI(token_i, expert_j) = log [ p(token_i, expert_j) / (p(token_i) p(expert_j)) ].",
"Table 6 lists the most indicative tokens of each expert, for the first layer.",
"While some of the terms for some experts seem loosely related (e.g., bell, reuters, and computing for expert 2), it is hard to find clear patterns in most of them.",
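A count-based sketch of this PMI computation; the attribution of positions to experts and the tokenization are left abstract, and the interface is our assumption:

```python
import math
from collections import Counter

def pmi_table(pairs):
    # pairs: list of (token, expert) observations, where the expert is the
    # one maximizing the gating weight for the position the token occupies.
    joint = Counter(pairs)
    tok = Counter(t for t, _ in pairs)
    exp = Counter(e for _, e in pairs)
    n = len(pairs)
    # PMI(t, e) = log( p(t, e) / (p(t) p(e)) ) = log( c * n / (tok[t] * exp[e]) )
    return {(t, e): math.log(c * n / (tok[t] * exp[e]))
            for (t, e), c in joint.items()}

# The most indicative tokens of an expert are those with the highest PMI.
```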
"We now turn to evaluate another property of MAE: its potential for data-efficient transfer learning, by only updating the gating functions, freezing the experts.",
"We consider the pretrain-then-finetune setting.",
"Due to computation limits, we are unable to explore MAE for pre-training contextual representations (Peters et al., 2018; Devlin et al., 2019).",
"Rather, we focus on the following small-scale machine translation experiments.",
"Setting.",
"We explore finetuning, on the IWSLT14 EN-DE data, a MAE model pretrained on WMT14.",
"Figure 2: IWSLT14 EN-DE development BLEU as a function of the percentage of training data used for finetuning.",
"We compare the following settings.",
"FTG finetunes the gating functions' parameters (i.e., φ), keeping the rest frozen.",
"FTG+ updates the parameter matrix W in Eq. 4 in addition to φ.",
"The rest of the model parameters are fixed.",
"FTALL updates all parameters.",
"As a baseline, NOFT is the out-of-the-box pretrained model without any finetuning.",
"SCRATCH trains a MAE model from scratch.",
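A sketch of how the settings differ in which parameters stay trainable; the parameter-name prefixes are placeholders of ours, since the text does not specify an implementation:

```python
GATE_PARAMS = ("gate",)       # assumed prefix for the gating parameters (phi)
EXTRA_PARAMS = ("proj_W",)    # assumed name for the matrix W of Eq. 4

def set_finetune_mode(model, mode):
    # Freeze/unfreeze parameters for the finetuning settings.
    for name, p in model.named_parameters():
        if mode == "FTALL":
            p.requires_grad = True                               # update everything
        elif mode == "FTG":
            p.requires_grad = name.startswith(GATE_PARAMS)       # only phi
        elif mode == "FTG+":
            p.requires_grad = name.startswith(GATE_PARAMS + EXTRA_PARAMS)
```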
"Table 7 summarizes the IWSLT14 EN-DE development set performance.",
"Surprisingly, NOFT already outperforms SCRATCH without any finetuning.",
"We attribute this improvement to the larger pretraining (WMT14) data.",
"Only updating the gating functions, FTG improves over NOFT by 0.8 BLEU.",
"Yet there is still a significant gap of 1.8 BLEU between FTG and FTALL .",
"Interestingly, FTG+ almost matches the performance of FTALL, but only updates 1/9 as many parameters.",
"Both FTG and FTG+ reach the best performance after around 1K gradient updates, i.e., one epoch, significantly less than FTALL or SCRATCH.",
"We further compare FTG+ and FTALL where less downstream training data is available.",
"To simulate this, we randomly sample [5%, 10%, 25%, 50%, 75%] subsets of IWSLT14 training data, on which the pretrained model is finetuned.",
"Figure 2 plots their performance.",
"We see a clear trend: as less training data is available, the gap between FTG+ and FTALL decreases; when less than 20% of the training data is available, FTG+ outperforms FTALL.",
"These results suggest that finetuning MAE with FTG+ can be viable in low-resource transfer learning.",
"Multi-head attention.",
"An increasing amount of effort has been devoted into developing better attention mechanisms (Malaviya et al., 2018; Deng et al., 2018; Sukhbaatar et al., 2019; Correia et al., 2019; Maruf et al., 2019, inter alia ), and improving transformer architectures (Shaw et al., 2018; Dehghani et al., 2019; Hao et al., 2019; Correia et al., 2019; Yang et al., 2019a, inter alia ).",
"Closely related, Iida et al. (2019) apply another attention mechanism over the attention heads, allowing a learned reweighting of them.",
"Our work focuses on the connection between multi-head attention and MoE, and the BCD training it suggests and benefits from.",
"Concurrent to our work, Fan et al. (2020) study structurally pruning transformer layers for more efficient inference.",
"Another line of work aims to better understand the working of transformer models (Clark et al., 2019; Liu et al., 2019a; Tenney et al., 2019, inter alia ).",
"Mixture of experts.",
"One of the most successful applications of MoE is ensemble learning (Caruana et al., 2004; Liu et al., 2018; Dutt et al., 2017, inter alia).",
"Recent efforts also explore MoE in sequence learning (Shazeer et al., 2017), and to promote diversity in text generation (He et al., 2018; Shen et al., 2019; Cho et al., 2019, inter alia ).",
"We presented MAE.",
"It is inspired by a mixture-of-experts perspective of multi-head attention.",
"With a learned gating function, MAE activates different experts on different inputs.",
"MAE is trained using a block coordinate descent algorithm, which alternates between updating the responsibilities of the experts and their parameters.",
"Our experiments show that MAE outperforms the transformer baselines on machine translation and language modeling benchmarks.",
"The analysis shows that MAE learns to activate different experts.",
"The code is publicly available at https://github.com/ Noahs-ARK/MAE .",
"We thank the anonymous reviewers, Yoav Artzi, Mandar Joshi, Jungo Kasai, Lingpeng Kong, Kenton Lee, Kelvin Luu, Will Merrill, Phoebe Mulcaire, Mark Neumann, Nikos Pappas, Ofir Press, Lianhui Qin, Swabha Swayamdipta, Vivek Srikumar, Sam Thomson, and Dani Yogatama for their helpful feedback.",
"This work was supported in part by NSF grant 1562364, a Google Fellowship, and NVIDIA Corporation through the donation of a Tesla GPU."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"We examine the new task of detecting derogatory compounds (e.g. curry muncher ).",
"Derogatory compounds are much more difficult to detect than derogatory unigrams (e.g. idiot ) since they are more sparsely represented in lexical resources previously found effective for this task (e.g. Wiktionary).",
"We propose an unsupervised classification approach that incorporates linguistic properties of compounds.",
"It mostly depends on a simple distributional representation.",
"We compare our approach against previously established methods proposed for extracting derogatory unigrams.",
"Abusive or offensive language is commonly defined as hurtful, derogatory or obscene utterances made by one person to another person.",
"Examples are (1)-(3).",
"In the literature, closely related terms include hate speech (Waseem and Hovy, 2016) or cyber bullying (Zhong et al., 2016).",
"While there may be nuanced differences in meaning, they are all compatible with the general definition above.",
"(1) stop editing this, you dumbass .",
"(2) Just want to slap the stupid out of these bimbos",
"(3) Go lick a pig you arab muslim piece of scum .",
"Due to the rise of user-generated web content, in particular on social media networks, the amount of abusive language is also steadily growing.",
"NLP methods are required to focus human review efforts towards the most relevant microposts.",
"A substantial amount of abusive utterances comprises derogatory words (e.g. bimbo or scum ).",
"Automatic extraction methods of such words are required since new derogatory words constantly enter language.",
"Wiegand et al. (2018a) extracted a large list of such expressions and demonstrated its importance for text classification.",
"0 Present affiliation: Leibniz ScienceCampus, Heidelberg/Mannheim, Germany",
"1 http://thelawdictionary.org/",
"In this work, we focus on a subtype of derogatory terms, namely derogatory compounds (e.g. booze hound , curry muncher , fault finder ).",
"Distinguishing such multi-word expressions from non-derogatory ones (e.g. fox hound, mile muncher, branch finder) is more difficult than classifying unigrams, since they are only sparsely represented in general-purpose lexical resources which have previously been found an effective source from which to learn abusive language, such as Wiktionary.",
"For example, while 97% of the derogatory unigrams of the gold standard lexicon in Wiegand et al. (2018a) are contained in Wiktionary, less than 17% of the derogatory compounds used as our gold standard in this work can be found.",
"Despite their sparsity in lexical resources, derogatory compounds are a frequent phenomenon, particularly in German data, which is why we study this task on that language.",
"On the German benchmark corpus for abusive language detection, the GermEval corpus (Wiegand et al., 2018b), we found that of the abusive microposts in the test set that include at least one derogatory expression, 39% contain a derogatory compound.",
"In our work, we focus on noun-noun compounds.",
"Each compound (e.g. curry muncher ) comprises two constituents, a modifier (i.e. curry ) and a head (i.e. muncher ).",
"On the GermEval corpus, 77% of the derogatory compounds are noun-noun compounds.",
"We only consider compounds whose constituents are not derogatory.",
"58% of the derogatory compounds on the GermEval corpus fall under this category.",
"Given publicly available lists of derogatory unigrams, the detection of derogatory compounds containing derogatory constituents (e.g. motherfucker ) is rather trivial.",
"There even exist abusive word generators employing such compounds.",
"We present the first study to detect derogatory noun-noun compounds and propose an unsupervised classification approach based on distributional information that does not require any properly labeled training data.",
"We demonstrate that linguistic features that have previously been found effective for the classification of derogatory unigrams are notably less effective for the detection of derogatory compounds.",
"We created a new dataset of derogatory compounds which will be made publicly available .",
"Our task is framed as a binary classification problem.",
"Each given compound is to be classified out of context as either derogatory or not.",
"For the sake of accessibility, we use English translations of our German compounds in this paper.",
"Lexical knowledge for the detection of abusive language has received only little attention in previous work (Schmidt and Wiegand, 2017); the notable exceptions are Razavi et al. (2010), who present a manually-compiled lexicon, Gitari et al. (2015), who bootstrap hate verbs, and Wiegand et al. (2018a), who induce a lexicon of derogatory words.",
"In none of these works, however, are derogatory compounds explicitly addressed.",
"We built a gold standard of derogatory compounds to train and test classifiers.",
"We inspected a range of websites containing derogatory word lists.",
"Since ambiguity is a massive problem in these lists, which makes them hardly usable for abusive language detection (Wiegand et al., 2018a), we manually extracted noun-noun compounds which we considered unambiguously derogatory.",
"In order to produce non-derogatory compounds, we randomly sampled from the COW16 corpus (Schäfer, 2015), for each derogatory compound (e.g. booze hound), other compounds sharing the same head (e.g. fox hound, stag hound).",
"3 http://sweary.com",
"4 https://github.com/uds-lsv/offensive-compounds",
"5 www.hyperhero.com/de/insults.htm, www.schimpfwoerter.de, www.seechat.de/warmduscher.htm",
"6 These lists contain many compounds commonly used in a non-offensive manner, e.g. Colatrinker (coke drinker).",
"Since among those putative nonderogatory instances, there could well be further derogatory compounds, we manually annotated them as well.",
"We limited the set of compounds sharing the same head, which we henceforth call head group , to 20 compounds.",
"Thus, we hope to avoid any biases towards particular heads.",
"We also looked at the natural distribution of heads on derogatory compounds.",
"As a proxy we considered the union of all derogatory compounds found on the above websites.",
"Figure 1 plots the frequency rank of the heads against the relative frequency of a particular head.",
"The plot suggests that the heads follow a power-law distribution (Zipf, 1965).",
"As a consequence, one cannot assume that this task could be solved by looking up heads in a finite lexicon with words that often form derogatory compounds in combination with different modifiers.",
"On a sample of 600 compounds, we measured a substantial agreement of Cohen's κ = 0.61 (Landis and Koch, 1977) between 2 annotators.",
"Our final dataset (Table 1) comprises 3,500 compounds, with only 11% being derogatory.",
"We also created a gold standard of derogatory unigram words in order to examine in how far derogatory compounds can be detected by a classifier trained on derogatory unigrams.",
"For this lexicon, we manually translated the base lexicon from Wiegand et al. (2018a) to German.",
"Our method does not require any labeled training data.",
"In the first step (§4.1), we apply high-precision diagnostics for the detection of derogatory compounds.",
"The output is a set of rankings in which derogatory compounds should be ranked highest.",
"In the second step (§4.2), we combine and rerank the output of those diagnostics.",
"Our method largely relies on a distributional representation of our compounds.",
"We induced embeddings of our compounds using Word2Vec (Mikolov et al., 2013) on the COW16 corpus, which with its 30B tokens is one of the largest German corpora.",
"Since we exclusively work on German data and German compounds occur as closed compounds, e.g. Milchbube (milk sop) or Schnapsdrossel (booze hound), we can employ standard tokenization for inducing embeddings for our compounds.",
"Negative Polarity (NEG).",
"Derogatory words form a subset of negative polar expressions.",
"Due to their sparsity, however, derogatory compounds are rarely part of any sentiment lexicon (containing polar expressions).",
"We, therefore, rank all our compounds according to their cosine similarity to a centroid embedding-vector computed from all negative polar expressions from the German PolArt sentiment lexicon (Klenner et al., 2009).",
"Compound Occurrence vs. Constituent Occurrence (COMCON).",
"Derogatory compounds can be creative word constructions (e.g. booze hound , oxygen thief , keyboard warrior ).",
"Consequently, their constituents are often not semantically related.",
"For instance, in booze hound , booze bears no common semantic relation to hound .",
"Therefore, the corpus frequency of a derogatory compound should be much higher than its constituents co-occurring in a sentence (i.e. with other words occurring in between).",
"Such co-occurrences should be coincidental.",
"We capture this by the following formula (frequencies are computed on the COW16 corpus): COMCON = (# compound mentions in corpus) / (# constituents co-occurring in a sentence). (1)",
"In prose, COMCON ranks all compounds by the ratio of observed compound mentions and constituent co-occurrences in a sentence.",
"For derogatory compounds, there should be a high frequency of compound mentions but only a low frequency of the constituents co-occurring in a sentence.",
"Therefore, COMCON will have a high score.",
"While there is a similarly high frequency of compound mentions for non-derogatory compounds, there is also a high frequency of the constituents co-occurring in a sentence since these constituents are usually semantically related (e.g. landowner or circus clown ).",
"This should result in COMCON producing comparably lower scores.",
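A sketch of the COMCON ranking; the smoothing constant guarding against zero co-occurrence counts is our assumption, not specified in the text:

```python
def comcon_ranking(stats, eps=1.0):
    # stats: dict compound -> (compound_mentions, constituent_cooccurrences),
    # both counted on the COW16 corpus. Higher score = more likely derogatory.
    scores = {c: m / (co + eps) for c, (m, co) in stats.items()}
    return sorted(scores, key=scores.get, reverse=True)
```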
"Derogatory Compound Must Be Person (PERSON).",
"We rank our compounds with regard to how likely they represent a person since many non-derogatory compounds represent either objects or animals (e.g. booze hound vs. sight hound , fox hound , stag hound ).",
"We first compute a centroid vector representing persons.",
"Then, we rank compounds by their similarity to that vector.",
"As a proxy for persons, we took embeddings of words representing professions, e.g. banker , lawyer , salesman .",
"We also experimented with personal pronouns as a proxy for persons.",
"However, we found them unsuitable since they are also often used as referring expressions to other entities, such as animals.",
"Professions, on the other hand, can only refer to humans.",
"The list of professions we used was created ad-hoc.",
"It should be reproducible in any arbitrary language.",
"The full list is included in the supplementary notes.",
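A sketch of the PERSON diagnostic, assuming precomputed embeddings; the dictionary-based interface is illustrative:

```python
import numpy as np

def person_ranking(compound_vecs, profession_vecs):
    # Rank compounds by cosine similarity to the centroid of profession-word
    # embeddings (banker, lawyer, salesman, ...), used as a proxy for persons.
    centroid = np.mean(profession_vecs, axis=0)

    def cos(v):
        return float(v @ centroid / (np.linalg.norm(v) * np.linalg.norm(centroid)))

    return sorted(compound_vecs, key=lambda c: cos(compound_vecs[c]), reverse=True)
```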
"Outlier Compound(s) in Head Group (OUT).",
"In most head groups, derogatory compounds represent a clear minority with only 1 or 2 compounds.",
"The derogatory compounds are also often semantically different from the non-derogatory compounds ( keyboard warrior vs. rajput warrior , ninja warrior , samurai warrior ).",
"This is particularly true if the non-derogatory compounds are very homogeneous.",
"From that observation we derive a diagnostic in which we determine the semantic outlier(s) for each head group.",
"First, we compute for each compound the average pairwise similarity to all other compounds within its head group.",
"The resulting score of a compound (converted to a dissimilarity score by taking its inverse) is then multiplied by a weight representing the homogeneity of all compounds within that head group (pseudocode is provided in the supplementary notes).",
"This is done since, for head groups whose compounds are very homogeneous, a semantic outlier is a stronger indicator of a derogatory compound.",
"8 The homogeneity weight is the average pairwise similarity of all compounds belonging to the same head group.",
"Combination (COMB).",
"Negative polarity is a pre-requisite for being derogatory (Sood et al., 2012; Dinakar et al., 2012; Gitari et al., 2015).",
"Therefore, we base our combination on the ranking of NEG.",
"From that ranking we remove all those compounds which have not co-occurred at the high ranks of at least one of the other diagnostics (COMCON, OUT, PERSON).",
"Compounds that are highly ranked by several diagnostics should more likely represent derogatory compounds.",
"Re-Ranking by PageRank (PRANK).",
"We observed that among the top ranks of COMB, the derogatory compounds are semantically similar (e.g. dwarf tosser , mischief maker , slimeball ) while the non-derogatory compounds are semantically different from each other (e.g. biker club , spirit bear ).",
"Therefore, we run personalized PageRank (Agirre and Soroa, 2009) to further improve the ranking by enforcing the compounds on the high ranks to be distributionally similar.",
"We build a word-similarity graph where our compounds are nodes and edges encode cosine-similarities of their embeddings.",
"PageRank then produces a ranking of nodes where the highest ranked nodes are the ones most highly connected.",
"In personalized PageRank prior information is added.",
"A biased graph is constructed in which attention is drawn towards particular regions of interest.",
"This is achieved by assigning re-entrance weights to the individual nodes.",
"As prior information, we assign the nodes representing the compounds returned by COMB a uniform re-entrance weight, while all other nodes receive a weight of 0.",
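A power-iteration sketch of personalized PageRank on the compound graph; the damping factor is a common default, not a value given in the text:

```python
import numpy as np

def personalized_pagerank(S, prior, damping=0.85, iters=100):
    # S: [n, n] nonnegative cosine-similarity matrix over compound embeddings
    # (the word-similarity graph); prior: re-entrance weights, uniform over the
    # compounds returned by COMB and 0 elsewhere.
    W = S / S.sum(axis=0, keepdims=True)        # column-stochastic transitions
    p = prior / prior.sum()
    r = np.full(len(p), 1.0 / len(p))
    for _ in range(iters):
        r = damping * (W @ r) + (1 - damping) * p
    return r                                     # higher score = higher rank
```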
"Label Propagation (LP).",
"While previous diagnostics were designed to isolate a few derogatory compounds with a high precision, LP aims for increasing recall.",
"We define some high-precision seeds for the two categories of our task and then propagate the labels to the unlabeled compounds by using label propagation (Talukdar et al., 2008).",
"The algorithm operates on the same word-similarity graph that we used for PRANK.",
"We define highly ranked compounds from PRANK as derogatory seeds and lowly ranked compounds as non-derogatory seeds.",
"9 We took the top 350 from all these rankings, which resembles the number of derogatory compounds in our dataset.",
"Unlike the previous diagnostics, the output of LP is a binary categorization rather than a ranking.",
"In order to make this output comparable to the other diagnostics, we converted the output of LP to a ranking.",
"This is achieved by ranking the compounds predicted as derogatory according to the confidence score provided by the classifier.",
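A generic iterative label-propagation sketch (not the exact Talukdar et al. (2008) algorithm) that clamps the seeds and returns the soft scores used to convert the binary output into a ranking:

```python
import numpy as np

def label_propagation(W, seeds, iters=50):
    # W: [n, n] word-similarity graph (same graph as used for PRANK);
    # seeds: dict node index -> label (1 = derogatory, 0 = non-derogatory).
    P = W / W.sum(axis=1, keepdims=True)     # row-normalized propagation matrix
    y = np.full(len(W), 0.5)
    for i, lab in seeds.items():
        y[i] = lab
    for _ in range(iters):
        y = P @ y
        for i, lab in seeds.items():         # clamp seed labels each iteration
            y[i] = lab
    return y                                  # sort by this score to get a ranking
```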
"Table 2 shows the precision at rank n (P@ n ) of different rankings as measured on our compound gold standard.",
"For LP, we consider the top 50 compounds from PRANK as derogatory seeds and the bottom 500 as non-derogatory seeds.",
"As a baseline we add a randomized ranking (RAND).",
"PRANK produces a very high precision on the high ranks, outperforming the individual rankings and COMB.",
"We also tested a modification, PRANKNEG, which applies personalized PageRank on the output of NEG, which is the strongest individual ranking.",
"Since PRANK outperforms PRANKNEG , we conclude that the high precision of PRANK also depends on the combination of the individual rankings.",
"LP manages to notably raise scores on the lower ranks (e.g. P@300) which proves the advantage of LP over PRANK.",
"Table 3 compares our proposed method (LP) against supervised classifiers.",
"We evaluate the entire classification output (with F1-measure) rather than a ranking.",
"The classifiers are trained on our unigram or compound gold standard (3).",
"For the latter case, we conducted 10-fold cross-validation.",
"500 of the 3,500 compounds were reserved as a development set on which we tuned hyperparameters of the supervised classifiers.",
"(The supplementary notes contain more details.)",
"As features we consider word embeddings and the linguistic features from Wiegand et al. (2018a).",
"They are based on knowledge that is expensive to produce, such as sentiment views, polar intensity, or information from Wiktionary.",
"Table 3 shows that learning from the compound gold standard is more effective than learning from the existing unigram gold standard.",
"11 The ratio of derogatory and non-derogatory compounds should vaguely reflect the class distribution.",
"Table 4: Comparison of compositional approaches. Unit: head / modifier / compound / combined / characters; classifier: SVM (embeddings + linguistic) for the first four units, LSTM for characters; F1: 57.0 / 60.2 / 74.7 / 69.0 / 54.5.",
"Given the strong performance of embeddings, we also examined the performance of (publicly available) off-the-shelf embeddings and found that the high classification scores can be mainly ascribed to the large corpus on which we induced our embeddings (i.e. COW16).",
"Our unsupervised approach (LP) is almost on a par with the most complex SVM.",
"This is particularly appealing since we produced that classifier without manually labeled training data and those manually-created resources required for the linguistic features.",
"Compound embeddings are the most predictive information for our task, but even from the large COW16 corpus, we only obtained embeddings for 60% of our compounds.",
"In Table 4, we evaluate compositional information, which can also be used for compounds that lack an embedding.",
"We apply an SVM with the best previous feature set (of which embeddings are the main contributor) on the constituents of the compounds.",
"13 We took the publicly available embeddings induced on Twitter data from www.spinningbytes.com/resources/wordembeddings/",
"14 For the remaining compounds, we used dummy vectors.",
"Moreover, we train an LSTM on the sequence of characters of the compound.",
"Table 4 shows that information drawn from units other than the compound itself is less effective.",
"The feature combination of head, modifier and compound is not effective either.",
"Instead of applying embeddings on constituents and concatenating them, we also examine a sophisticated compositional model ( Wmask ) based on a masking process that takes into account the variation of a constituent depending on whether it is a head or a modifier (Dima, 2015).",
"Table 5 shows the performance of the two best previous classifiers where compounds lacking an embedding are represented by an embedding approximated by Wmask (rather than a dummy vector).",
"The table shows that the two classifiers can be improved by adding the approximated embeddings.",
"We examined the new task of detecting derogatory compounds and proposed an unsupervised approach incorporating linguistic properties of compounds that mostly depend on a distributional representation.",
"Our method outperforms linguistic features previously shown to be effective for the detection of derogatory unigrams and it is on a par with a far more expensive state-of-the-art supervised approach.",
"Features defined on the constituents of a compound and training a classifier on derogatory unigrams are far less effective.",
"The authors would like to thank Ines Rehbein for feedback on earlier drafts of this paper.",
"We are also grateful to Corina Dima for helping us to run her semantic composition toolkit wordcomp on our data.",
"The authors were partially supported by the German Research Foundation (DFG) under grants RU 1873/2-1 and WI 4204/2-1."
] | [
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"result",
"method",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"other",
"other",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Visual Dialog is a multi-modal task that requires a model to participate in a multi-turn human dialog grounded on an image, and generate correct, human-like responses.",
"In this paper, we propose a novel Adversarial Multimodal Feature Encoding (AMFE) framework for effective and robust auxiliary training of visual dialog systems.",
"AMFE can force the language-encoding part of a model to generate hidden states in a distribution closely related to the distribution of real-world images, resulting in language features containing general knowledge from both modalities by nature, which can help generate both more correct and more general responses with reasonably low time cost.",
"Experimental results show that AMFE can steadily bring performance gains to different models on different scales of data.",
"Our method outperforms both the supervised learning baselines and other fine-tuning methods, achieving state-of-the-art results on most metrics of VisDial v0.5/v0.9 generative tasks.",
"In recent years, there has been a rising attention in Artificial Intelligence on how to train a model to understand visual inputs from the physical world, and communicate them with human language.",
"Typical problems include Visual Question Answering (VQA) (Antol et al., 2015) and Image Captioning (Xu et al., 2015).",
"These tasks require a model to read an image and generate a proper response, such as answering a question grounded on the image, or generating a sentence to describe the image.",
"As a more difficult extension, Visual Dialog (De Vries et al., 2017; Das et al., 2017a; Mostafazadeh et al., 2017) is a cluster of tasks featuring two agents conducting a multi-turn dialog grounded on an image.",
"A model is usually trained to predict every single response of one of the two agents, based on the image and dialog history.",
"There are also some different task settings such as directly training two agents to complete a goal-driven cooperative task such as Guessing Game (Das et al., 2017b).",
"Tasks involving both the physical world (vi-sual images) and abstract world (languages) share a core issue: how to establish connections between these two worlds, and is there a framework to leverage these connections for learning?",
"Temporarily, the majority of answers are learning end-to-end models with multi-modal feature fusion (Kim et al., 2016; Fukui et al., 2016; Yu et al., 2018).",
"These methods usually merge the visual and language features into rich representations containing information from both sides.",
"Some cross-modal attention methods (Lu et al., 2016; Nam et al., 2017) formulate the visual-language connections explicitly by parameterizing the attention weights to learn whether there is high correlation within certain pairs of language and visual feature vectors.",
"However, in all these works, the merged representations or attention weights are only learned from pairwise (one image, one sentence) co-occurrence, and serve for the optimization of a loss function only related to the final ground-truth response.",
"In fact, the features from both sides are not truly connected in an aspect of general distributions, but only merged into a new vector for each training/testing sample.",
"We suppose that this is not good enough for a model to distill knowledge from both of the two worlds because the language/visual vectors do not contain knowledge from the other modality in the bottom level before they are merged.",
"In this paper, we discuss another possibility.",
"We want to establish an unsupervised framework of multi-modal encoding, which directly generates an image feature distribution from a language distribution, or vice versa.",
"For example, when a neural network based model receives a natural language sentence x as input, it encodes x into a sequence of high-dimensional continuous vectors.",
"All these language vectors can be projected into another latent space to have a new distribution p_l.",
"We train the language encoder to let the new distribution p_l be the same as, or very close to, the distribution p_v of all image features observed and encoded in the task data.",
"Since we can partly recover a real-world image distribution from the language vectors achieved in this way, these language vectors intrinsically contain both language semantics and real-world image properties.",
"This is a higher-level connection between the two worlds.",
"In order to train a model to generate samples subject to a certain distribution p v from an original distribution p l , Generative Adversarial Networks (GANs) have been proved very effective (Goodfellow et al., 2014; Arjovsky et al., 2017; Miyato et al., 2018).",
"Lample et al. (2018) used adversarial training on the vectors produced by sentence encoders for different languages in unsupervised machine translation.",
"However, different languages in their task are in single modality and share encoder structures, making the same method not directly usable and extendable for multi-modal tasks with largely different prior distributions and complex encoder structures with attention.",
"In our work, we propose Adversarial Multi-modal Feature Encoding (AMFE), a novel GAN-based training schedule with an attention-based sample-selecting method, which can successfully force the multi-modal vectors to have closely related distributions, benefitting the performances of various visual dialog systems.",
"We test our method on the VisDial (Das et al., 2017a) benchmark (one example is shown in Table 1).",
"A normal sample of VisDial contains an image and 10 turns of question-answering dialog from two people grounded on the image.",
"A series of models have been proposed to solve the task, including memory and attention based models (Das et al., 2017a), reinforcement learning (Das et al., 2017b), knowledge transfer techniques (Lu et al., 2017) and GAN (Wu et al., 2017).",
"Table 1: An example from the VisDial dataset. Caption: A dog with goggles is in a motorcycle sidecar. A(1): can you tell what kind of dog this is? B(1): he looks like beautiful pitbull mix. A(2): can you tell if motorcycle is moving or still? B(2): it's parked. A(3): is dog's tongue lolling out? B(3): not really.",
"Wu et al. (2017) designed a complex attention model and applied GAN in a traditional way to force the generated tokens to mimic real-world language (language vs. language), making their model only trainable through sequence sampling and reinforcement learning.",
"Our work, on the other hand, applies a directly differentiable GAN on continuous vectors as a multi-modal feature encoding method (language vs. image).",
"Our contributions include: We propose AMFE: a novel Adversarial Multi-modal Feature Encoding framework to benefit visual dialog models.",
"The core idea is to force features from different modalities to have closely related distributions.",
"We develop efficient AMFE implementations, including a novel attention-based sample selecting method, for various commonly-used visual dialog models.",
"Experimental results show that AMFE brings robust performance gains to different visual dialog models.",
"We achieve state-of-the-arts on most metrics of VisDial v0.5/v0.9 generative tasks.",
"Visual Dialog is a cluster of tasks sharing two properties: multi-turn and cross-modality.",
"VisDial (Das et al., 2017a) is a widely-used benchmark with question-answering style dialogs grounded on real-world images.",
"As a special case of dialog generation tasks, VisDial share some of the research concerns with single-modal natural language dialog generation (Dhingra et al., 2016; Serban et al., 2017; Sordoni et al., 2015; Serban et al., 2016; Liu et al., 2016).",
"Natural language dialogs are usually discrete, state-dependent and style-free, thus some reinforcement learning (RL) methods have been proposed (Li et al., 2016).",
"Das et al. (2017b) built a cooperative image guessing task on VisDial: they train both the questioner and the answerer, making them complete the same goal of helping the questioner produce a guess, or imagination, of the unseen image described by the answerer.",
"The distance between the guessing and the target image is used as reward for reinforcement learning.",
"In some extreme settings, such a task definition can even lead to emergence of a new language between robots (Kottur et al., 2017).",
"After pre-training, using their reinforcement learning method as an auxiliary loss can also bring performance gains on standard VisDial metrics such as mean rank.",
"However, generating a reward based on just one target image for a training sample may lead to a kind of overfitting.",
"Language is highly abstract: one dialog can correctly describe a lot of different scenes in real world, so why should we force a dialog to fit one single example among them?",
"Therefore, generating a reward from adversarial training is a more efficient way because it goes beyond individual samples into distributions.",
"There are two previous works (Wu et al., 2017; Lu et al., 2017) that use GAN-like methods to boost the performances of pre-trained VisDial models.",
"(Wu et al., 2017) proposes to use adversarial reinforcement learning.",
"A discriminator is trained to distinguish the tokens of real/generated answers, and the answerer (generator) is trained via RL using a reward related to the score given by discriminator.",
"This method is very effective, but using both RL Monte Carlo and GAN brings high computational cost.",
"Also, a lot of tricks are involved for a good training.",
"Our method, on the other hand, does not need Monte Carlo sampling to compute an immediate reward while generating each of the N words in a sentence (O(N) time cost).",
"(Lu et al., 2017) uses a knowledge-transferring method between generative and discriminative task settings.",
"However, this requires the models on both settings to be pre-trained well enough.",
"Our work is also an adversarial learning based method, but it is more robust, time-efficient and effective.",
"GAN (Goodfellow et al., 2014; Arjovsky et al., 2017; Miyato et al., 2018) has raised much attention because of its ability to directly generate samples subject to a target distribution.",
"Many training techniques have been proposed to solve the unstable training problems of GAN (Gulrajani et al., 2017; Kurach et al., 2018).",
"Wasserstein GAN (WGAN) (Arjovsky et al., 2017) is a successful method using critic learning loss and weight clipping operations.",
"We borrow some ideas from WGAN in the adversarial training of our model.",
"GAN well suits the image generation tasks because image signals are continuous and thus differentiable, enabling the gradient directly flowing back from the discriminator to generator.",
"In language generation tasks, however, how to deal with the discrete sequence of symbols generated by the generator has long been a problem.",
"A widely-used solution is applying RL with rewards generated by the discriminator (Wang et al., 2018; Li et al., 2017).",
"As mentioned above, this is time-costing because RL needs to explore a large action space by sampling multiple action sequence.",
"Besides, how the immediate reward is computed after generating each word is also a difficult problem.",
"Another solution is to avoid the discrete problem by applying adversarial training on the hidden states of the generator.",
"This requires that there is a known distribution p for the hidden states we want the model to generate.",
"A successful case is reported by (Lample et al., 2018): using adversarial training to restrict the hidden states of source language and target language (both from vanilla LSTMs) into a same latent space can boost the performance of unsupervised machine translation.",
"Our AMFE framework is also an adversarial training on the language hidden states, but we are the first to use this kind of methods to establish connections between different modalities.",
"Our training procedure is also largely different from (Lample et al., 2018) with our modified WGAN-like algorithm and a novel attention-based sample selection method: they are critical for training convergency on multi-modal tasks, with complex attention-based model structures.",
"We first define the task and our framework formally, and then describe how it is implemented and trained on different visual dialog models.",
"In the VisDial task, each sample contains an image I , a caption sentence C and a dialog D with T = 10 turns in total.",
"In each turn t , there is a question q t about the image, and a ground truth answer a t .",
"The model needs to read the dialog history H = { C, ( q 1 , a 1 ) , ..., ( q t 1 , a t 1 ) } and image I , to generate an answer as a response to q t .",
"We write H_t = (q_t, a_t) and H_0 = C.",
"Formally, the dialog agent (named A-Bot) outputs an answer conditioned on the image I, the dialog history H, and the current question q_t.",
"Figure 1: Our framework; an image encoder produces h_v and language encoders produce h_l (example question: A: Is there a man or a woman?).",
"The goal of our Adversarial Multi-modal Feature Encoding (AMFE) is to restrict the distribution of feature representations from one modality m_1 to be closely related to that from another modality m_2.",
"We take m_1 = l(anguage) and m_2 = v(isual).",
"Specifically, A-Bot encodes language inputs into vectors h_l, and visual inputs into h_v, respectively.",
"We want h_l and h_v to have indistinguishable distributions: h_l, h_v ~ p(h).",
"To achieve this goal, we use a discriminative model (named D-Bot) to classify whether a vector encoded by A-Bot comes from modality l or v .",
"D-Bot is trained with real h_l and h_v samples, while A-Bot is trained to generate language vectors h_l that confuse D-Bot into classifying them as label v.",
"Figure 1 shows our framework.",
"We implement our AMFE method on two commonly-used visual dialog models, using them as A-Bot.",
"The two A-Bot models are named Hierarchical Recurrent Encoder (HRE) and History-Conditioned Image Attentive Encoder (HCIAE), respectively.",
"A-Bot learns to predict the right answer in each turn.",
"In this process, it also encodes language and visual inputs into h l and h v samples, which we use for AMFE training.",
"HRE is a hierarchical LSTM (Hochreiter and Schmidhuber, 1997) model used in (Das et al., 2017a,b).",
"In HRE, a pre-trained Convolutional Neural Network (CNN) encodes the image into a single feature vector, which is further mapped into a visual representation I by a trainable MultiLayer Perceptron (MLP).",
"In each turn t, the question is encoded by a word-level LSTM into a question vector q_t, and the dialog history in the previous step, H_{t−1}, is encoded by another LSTM into vector f_{t−1}.",
"There is a state-tracker LSTM_st on the top level: LSTM_st is forwarded one step each turn, integrating all the encoded vectors mentioned above.",
"It reads the encoded history f_{t−1}, the image vector I, the question q_t, and its own previous hidden state s_{t−1}, and produces the new hidden state representation s_t: s_t = LSTM_st([q_t, I, f_{t−1}], s_{t−1}), (3) where [ ] stands for concatenation.",
"The answer decoder in HRE is an LSTM that takes s t as initial state, and predicts one word at a time by a softmax probability over the vocabulary, to generate the whole answer sentence.",
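A minimal sketch of the state tracker in Eq. 3, with an illustrative feature size; the helper names are ours:

```python
import torch
import torch.nn as nn

d = 512                                       # feature size, illustrative
lstm_st = nn.LSTMCell(input_size=3 * d, hidden_size=d)

def track_step(q_t, I, f_prev, state):
    # Eq. 3: s_t = LSTM_st([q_t, I, f_{t-1}], s_{t-1});
    # the concatenated inputs advance the top-level tracker by one turn.
    x = torch.cat([q_t, I, f_prev], dim=-1)   # [B, 3d]
    h, c = lstm_st(x, state)
    return h, (h, c)                          # s_t, plus the carried LSTM state
```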
"Figure 2(a) shows the encoder structure of HRE.",
"We use the image vectors I as h_v samples (dark green) in AMFE, and both the q and f vectors as h_l samples (pink).",
"HCIAE model (Lu et al., 2017) contains an textual attention on all history vectors based on the question, and a visual attention based on both the history and the question.",
"In detail, it uses a pre-trained CNN to encode the image into a set of visual feature vectors V .",
"Each vector in V is further passed through a trainable MLP, resulting in a visual feature set {i_0, ..., i_{K−1}}.",
"In each turn t , the question is encoded by an LSTM into vector q t ; the dialog history { H 0 , H 1 , ..., H t 1 } is encoded by another word-level LSTM into vectors { f 0 , ..., f t 1 } .",
"The attention weight between q_t and each history vector f_j is computed as: z_t^j = T_a^T tanh(W_f f_j + W_q q_t), α_t^j = softmax(z_t^j), (4) where T_a ∈ R^{d×1}, W_f ∈ R^{d×d}, and W_q ∈ R^{d×d} are trainable parameters; d is the length of both question and history features.",
"A memory vector m t is computed by: m t = t 1 (cid:2) j =0 jt f j .",
"The memory vector is further used as a key to compute a similar visual attention over {i_0, ..., i_{K−1}} to achieve a final image vector v_t.",
"The final output of the encoder is computed by: e_t = tanh(W_e [q_t, m_t, v_t]), (6) where W_e ∈ R^{d×3d} is a trainable parameter matrix; [ ] stands for concatenation.",
"The answer decoder is an LSTM like that of HRE, taking e_t as input.",
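A sketch of the question-conditioned history attention of Eqs. 4–5; the analogous visual attention producing v_t, and the fusion of Eq. 6, follow the same pattern:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HistoryAttention(nn.Module):
    # Eqs. 4-5: attention over encoded history vectors, keyed by the question.
    def __init__(self, d):
        super().__init__()
        self.W_f = nn.Linear(d, d, bias=False)
        self.W_q = nn.Linear(d, d, bias=False)
        self.T_a = nn.Linear(d, 1, bias=False)

    def forward(self, q_t, f):                 # q_t: [B, d], f: [B, t, d]
        z = self.T_a(torch.tanh(self.W_f(f) + self.W_q(q_t).unsqueeze(1)))  # [B, t, 1]
        alpha = F.softmax(z, dim=1)            # attention weights over history
        return (alpha * f).sum(dim=1)          # memory vector m_t (Eq. 5)
```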
"Figure 2(b) shows the structure of the HCIAE encoder.",
"HCIAE produces more visual vectors for each image than HRE.",
"We take all the q and f vectors as h_l candidates of AMFE, and the spatial visual features {i_0, ..., i_{K−1}} as candidates for h_v.",
"Despite the multiple choices of A-Bot, our D-Bot is always an MLP with two hidden layers of size 512 and ReLU activation.",
"It is used to compute a loss function that forces all the h_l samples to be subject to the same distribution p(h) as the visual vectors h_v.",
"D-Bot takes a vector h of size d as its input, and predicts the probability of h coming from the real image distribution and the visual encoder: p_v(modality = v | h) = D-Bot(h).",
"D-Bot is the discriminator from a GAN viewpoint.",
"A-Bot must learn to confuse D-Bot in order to generate language features indistinguishable from image features.",
"To train our model, we use standard supervised training with cross-entropy loss function for pretraining, and add in adversarial training to produce an auxiliary loss to improve feature encoding.",
"The supervised learning loss is: L_su = −(1/N) Σ_{n=1}^{N} log p(w_t^n | w_t^{<n}), (8) where N is the full length of the decoded sentence.",
"(9) We sum L adv as an auxiliary loss with a tunable weight , making A-Bot minimize: LG = L su + L adv .",
"(10)",
"On the other hand, D-Bot maximizes the following objective to distinguish real-world image vectors h v from the language vectors h l : LD = E h v [ DBot ( h v )] E h l [ DBot ( h l )] .",
"(11)",
"We switch between A-Bot and D-Bot updates for each batch of dialog samples.",
"For adversarial learning, A-Bot is trained to minimize the probability that D-Bot predicts the generated features to be fake samples.",
"Following WGAN (Arjovsky et al., 2017), we do not use logarithm but directly optimize the likelihood itself: L adv = E h l [ DBot ( h l )] .",
"We have specified where the h l and h v samples come from while using different A-Bots in Section 3.3.",
"Typically, for each batch of samples with batch-size M , in each turn t , there are M question vectors and M t history vectors as h l candidates.",
"For HRE encoder, there are M image vectors as h v candidates, while the number is M K for the HCIAE encoder; K is the number of pixels in the final CNN feature-map.",
"Thus, it is impossible to use all the generated samples in AMFE.",
"For a successful training, the selected samples must be efficient, informative and balanced.",
"While using HCIAE, in order to compute L adv , we use M question vectors and M w history vectors as h l samples.",
"The history vectors are selected using textual attention weights jt produced by the temporary model: for each dialog, we pick the top w history vectors with the highest attention weights.",
"We call this Attention-based Sample Selection (AbS).",
"While computing LD to train DBot, we use the same technique on the image, using the top-attended M w image vectors, together with another M image vectors randomly sampled from the dataset as positive samples h v .",
"The M question vectors and M w history vectors are used as a pool of negative samples h l .",
"In our experiments, w = 1 , 2 works well.",
"While using HRE, since the model always at-tends on f t 1 by default (Eq. 3), we directly select q t and f t 1 as h l samples.",
"We use the M image vectors I in this batch, together with another M image vectors randomly sampled from the dataset as the pool of h v .",
"The full training procedure is specified in Algorithm",
"1. 4 Experiments 4.1 The VisDial Dataset VisDial is a visual dialog dataset based on MS COCO (Lin et al., 2014) images.",
"There are 10 turns of human-posed question-answering dialogs on each image, with the questioner kept not seeing the image during the data collection process.",
"For generative models, a model must give the probability of generating each candidate answer without seeing other candidates, and the rank of the ground-truth answer in the 100 candidates is used to compute different evaluation metrics; for discriminative models, the model can read and encode all the candidate answers and directly assign scores on them.",
"According to the nature of GANs Algorithm 1 AMFE Training Procedure.",
"and similarities to real-world application scenarios, we use the generative setting for our model: it is equipped with a sequential decoder instead of a scoring module.",
"For fair and sufficient comparison, we evaluate our model on both VisDial v0.5 and VisDial v0.9.",
"VisDial v0.5 has 68k COCO images, for a total of 680k QA-pairs.",
"Following (Das et al., 2017a) and (Das et al., 2017b), we use 50,729 images for training, 7,663 for validation and 9,628 for testing.",
"Visdial v0.9 has 123,287 images.",
"There are different splitting of train/valid/test in previous work.",
"We follow (Lu et al., 2017) to use 82k for training, 1k for validation and 40k for testing.",
"1 We compare our results to several existing models on the VisDial dataset, including: Answer Prior (Das et al., 2017a): directly encoding answer candidates with an LSTM and scoring by a linear model that captures the frequency of answers in the training set.",
"NN-QI (Das et al., 2017a): a k-Nearest Neighborhood method considering only the question and the image.",
"Unlike generative methods, both Answer Prior and NN-QI need to know the answer candidates.",
"LF-QIH-G (Das et al., 2017a): a Late Fusion encoder that encodes the question, image and history separately.",
"The encoded features are concatenated and linearly transformed to a 1 VisDial has released v1.0 recently, and claims that models trained on v0.9 should also use the new v1.0 test set.",
"Due to lack of baselines in the generative task, we follow the original widely-used settings of v0.5 and v0.9.",
"joint representation.",
"The answer is produced by a generative decoder.",
"HRE (Das et al., 2017b): the HRE model introduced in Section 3.3.",
"HREA-QIH-G (Das et al., 2017a): a modified HRE A-Bot with attention to dialog history.",
"MN-QIH-G (Das et al., 2017a): a Memory Network encoder that stores each piece of dialog history embeddings in an explicit memory.",
"These embeddings can be attended and fused while generating the answer.",
"HCIAE (Lu et al., 2017): the HCIAE model introduced in Section 3.3.",
"CoAtt (Wu et al., 2017): this is a previous state-of-the-art model with a more complex co-attention encoder; the decoder is enhanced by adversarial reinforcement learning for better answer generation.",
"We first test the efficiency of AMFE on the simpler A-Bot model: HRE.",
"We use VisDial v0.5 as our benchmark for fair comparison with other HRE-based models and auxiliary training methods.",
"For Visdial v0.5 dataset, we follow the preprocessing procedure and hyper-parameters described in (Das et al., 2017b).",
"We pass each image through a pre-trained VGG-16 (Simonyan and Zisserman, 2015) CNN, and pick the single f7 vector as input image feature.",
"We limit the maximum lengths of captions, questions and answers to be 40, 20 and 20, respectively; we remove words appearing less than 5 times in the training set, and replace them by a UNK token.",
"We use vector size 300 for word embedding and 512 for all language and visual feature vectors.",
"All LSTMs have two layers.",
"We pre-train A-Bot with L su for 20 epochs before L adv is added in.",
"The batch-size is set to be 32.",
"After each update of A-Bot, we perform 5 DBot updates.",
"We use the 32 encoded image vectors in the batch, together with 32 image vectors randomly sampled from the dataset, to form 64 positive samples; for negative samples, we use the 32 question vectors and 32 history vectors ( t 1 ) from the updated A-Bot.",
"We use Adam (Kingma and Ba, 2014) for A-Bot and RMSprop (Tieleman and Hinton, 2012) algorithm for D-Bot to perform gradient descending.",
"The learning rate is set to 1e-3 Model MRR R@1 R@5 R@10 Mean Answer Prior 0.311 19.85 39.14 44.28 31.56 NN-QI 0.385 29.71 46.57 49.86 30.90 LF-QIH-G 0.430 33.27 51.96 58.09 23.04 HREA-QIH-G 0.442 34.47 53.43 59.73 21.83 MN-QIH-G 0.443 34.62 53.74 60.18 21.69 HRE-MLE 0.436 33.02 53.41 60.09 21.83 Frozen-Q-Multi 0.437 33.22 53.67 60.48 21.13 HRE-AMFE 0.445 34.62 53.95 60.76 20.98 Table 2: VisDial v0.5 evaluation results.",
"for pre-training, further decayed to 5e-5; after adversarial training starts, the learning rate is fixed to 5e-5 for both Aand D-Bot.",
"In the weight clipping step of WGAN (Arjovsky et al., 2017), we use a clipping parameter c = 0 .",
"01 .",
"On VisDial v0.5, two previous top models are a Memory Network based model (MN-QIH-G) by (Das et al., 2017a) and a multi-loss training on HRE encoder (Frozen-Q-Multi) based on goal-driven reinforcement learning (Das et al., 2017b).",
"We start from the same HRE hyper-parameters and checkpoint as (Das et al., 2017b), but continue with our AMFE instead of reinforcement learning.",
"Table 2 shows the results on all the five evaluation metrics on VisDial v0.5.",
"Results in the first 4 rows are copied from (Das et al., 2017a).",
"AMFE achieves better performances than the supervised training of A-Bot model (HRE-MLE), especially significant on R@5, R@10 and mean rank, indicating that the adversarial feature encoding results in generally better dialogs.",
"It also outperforms the another HRE-like model with history attentions (HREA-QIH-G).",
"While used for multi-loss training, AMFE is significantly better than Frozen-Q-Multi, setting a new state-of-the-art on all metrics.",
"We point out that in Frozen-Q-Multi (Das et al., 2017b), the goal-driven reinforcement leaning reward is computed pair-wise (consider-ing how much can the questioner rebuild the image from the answerer's words), but the reward computed with a single image is not good enough to evaluate the dialog actions.",
"This is because language is much more abstract than image, and failure to recover an image does not necessarily mean that the dialog is actually bad.",
"Our method could avoid this issue because adversarial training is based on general distributions.",
"In this section, we test the efficiency of AMFE for the HCIAE model with attention.",
"We use VisDial v0.9 as our benchmark for fair comparison with (Lu et al., 2017).",
"For Visdial v0.9 dataset, we follow the preprocessing procedure and HCIAE structure described in (Lu et al., 2017).",
"We pass each image through a pre-trained VGG-19 CNN, resulting in a 512 7 7 feature-map as visual input.",
"To speed up convergence, we add a Batch Normalization (Ioffe and Szegedy, 2015) after the MLP that further encodes these visual vectors.",
"We limit the maximum lengths of captions, questions and answers to be 24, 16 and 8, respectively.",
"All LSTMs have only one layer.",
"HCIAE can be trained with either supervised loss (HCIAE-G-MLE) or with multi-loss involving knowledge-transfer (HCIAE-G-DIS).",
"We test AMFE in both settings.",
"For HCIAE-G-MLE, we pre-train HCIAE model with supervised loss for 20 epochs using learning rate 4e-4, and switch to AMFE training with learning rate 5e-5.",
"For HCIAE-G-DIS, we start from the generative model trained with AMFE, together with a pre-trained HCIAE discriminative model.",
"We follow the original knowledge-transfer training schedule, and add our L adv to the original mixed loss function with weight",
"1. We use batch-size 32 for AMFE training, although the original paper used 128.",
"Other settings are kept the same.",
"For more details please see (Lu et al., 2017).",
"Table 3 shows the results on v0.9.",
"All the HCIAE results are picked from (Lu et al., 2017), and all CoAtt results are picked from (Wu et al., 2017); CoAtt-GAN-TF stands for training a CoAtt model with adversarial reinforcement learning and supervised teacher-forcing; HCIAE-AMFE stands for using AMFE on an HCIAE-G-MLE pre-trained model; HCIAE-GD-AMFE means using AMFE as an additional loss to join the HCIAE-G-DIS multi-loss training.",
"On VisDial v0.9, we observe that using AMFE on HCIAE can also boost the performances.",
"Comparing HCIAE-G-MLE and HCIAE-AMFE, we can observe the same advantage over supervised training as on HRE, indicating that our method works for different dataset scales and A-Bot struc-Model MRR R@1 R@5 R@10 Mean Answer Prior 0.374 23.55 48.52 53.23 26.50 NN-QI 0.427 33.13 50.83 58.69 19.62 LF-QIH-G 0.520 41.83 61.78 67.59 17.07 HREA-QIH-G 0.524 42.28 62.33 68.17 16.79 MN-QIH-G 0.526 42.29 62.85 68.88 17.06 CoAtt-G-MLE 0.541 44.32 63.82 69.75 16.47 CoAtt-GAN-TF 0.558 46.10 65.69 71.74 14.43 HCIAE-G-MLE 0.539 44.06 63.55 69.24 16.01 HCIAE-G-DIS 0.546 44.35 65.28 71.55 14.23 HCIAE-AMFE 0.547 44.40 65.35 71.69 14.42 HCIAE-GD-AMFE 0.554 45.42 66.09 72.30 14.11 Table 3: VisDial v0.9 evaluation results.",
"tures; comparing HCIAE-AMFE and HCIAE-G-DIS, AMFE is a competitive method for auxiliary training.",
"Combining AMFE and HCIAE-G-DIS achieves better results than previous state-of-the-art (Wu et al., 2017) on R@5, R@10 and mean rank, and comparable on MRR and R@1.",
"Besides, AMFE trains reasonably faster because we avoid the O ( N ) time cost for Monte-Carlo sampling while computing temporary rewards (Wu et al., 2017).",
"We explain the efficiency of AMFE in two aspects.",
"Firstly, AMFE is an adversarial training procedure forcing the language to be encoded into a distribution closely connected to the images.",
"With attention-based sample selection, the most informative samples from both modalities are able to transfer knowledge.",
"Secondly, like Batch Normalization, AMFE contributes to bring better numerical properties to the intermediate tensors in a network, especially on their means and variances, which could potentially benefit model performance.",
"Both the weight of adversarial loss and the attention-based sample selection are critical to good performance.",
"Table 4 shows ablation studies on these factors on HCIAE and VisDial v0.9.",
"The above results show that AMFE is especially strong at more general metrics such as R@5 and mean rank.",
"To confirm that adversarial training on hidden states can help much to generate responses that are more natural, we randomly select 100 dialog samples from both VisDial v0.5 and v0.9 dataset, and ask two human subjects to vote for the responses generated by two groups of models: HRE-MLE vs. HRE-AMFE on v0.5, and HCIAE-G-MLE vs. HCIAE-AMFE on v0.9, both with beam-size",
"5. Model names are hidden while voting.",
"We ask the human subjects to consider two metrics separately: (1) the fluency of the generated answer sentences and (2) the correctness of the answers compared to the ground-truths.",
"As shown in Table 6, AMFE wins all the votes with different metric and different models, indicating that AMFE is robust in generating more natural responses.",
"We randomly sample some dialogs from VisDial v0.5 validation set and illustrate the ground-truth answers and the generated answers with/without AMFE.",
"Two results are shown in Table",
"5. In the first example, the model trained with AMFE generates a right vs. wrong answer in the 8-th turn, and a grammatically better response in the 5-th turn, compared to supervised pre-training.",
"In the second example, the model trained with AMFE has a generally right understanding of the questions and the image, while the HRE-MLE model is generating response as if it does not see the image.",
"This indicates that encoding language features in the image space leads to better understanding on both modalities.",
"We propose AMFE: an unsupervised multi-modal feature encoding framework and its implementations on different commonly-used visual dialog models.",
"Our core idea is to force features from different modalities to have closely related distributions.",
"Experiments show that AMFE can bring performance gains to both simple and complex models on different scales of VisDial dataset.",
"Future work will possibly be visualizing the visual and language features encoded by AMFE to find more straightforward interpretations, as well as trying our method on more complex structures, discriminative models, and on discriminative tasks such as VQA and visual reasoning.",
"We thank the reviewers for their insightful comments.",
"This work was supported by the National Natural Science Foundation of China (61602479), the Beijing Brain Science Project (Z181100001518006) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32070000)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"other",
"other"
] |
[
"The ICD coding task aims at assigning codes of the International Classification of Diseases in clinical notes.",
"Since manual coding is very laborious and prone to errors, many methods have been proposed for the automatic ICD coding task.",
"However, existing works either ignore the long-tail of code frequency or the noisy clinical notes.",
"To address the above issues, we propose an I nteractive S hared Representation Network with SelfD istillation mechanism.",
"Specifically, an interactive shared representation network targets building connections among codes while modeling the co-occurrence, consequently alleviating the longtail problem.",
"Moreover, to cope with the noisy text issue, we encourage the model to focus on the clinical note's noteworthy part and extract valuable information through a self-distillation learning mechanism.",
"Experimental results on two MIMIC datasets demonstrate the effectiveness of our method.",
"The International Classification of Diseases (ICD) is a healthcare classification system launched by the World Health Organization.",
"It contains a unique code for each disease, symptom, sign and so on.",
"Analyzing clinical data and monitoring health issues would become more convenient with the promotion of ICD codes (Shull, 2019) (Choi et al., 2016) (Avati et al., 2018).",
"The ICD coding task aims at assigns proper ICD codes to a clinical note.",
"It has drawn much attention due to the importance of ICD codes.",
"This task is usually undertaken by experienced coders manually.",
"However, the manually process is inclined to be labor-intensive and * Work was done during an internship at National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences.",
"error-prone (Adams et al., 2002).",
"A knowledgeable coder with medical experience has to read the whole clinical note with thousands of words in medical terms and assigning multiple codes from a large number of candidate codes, such as 15,000 and 60,000 codes in the ninth version (ICD-9) and the tenth version (ICD-10) of ICD taxonomies.",
"On the one hand, medical expert with specialized ICD coding skills is hard to train.",
"On the other hand, it is a challenge task even for professional coders, due to the large candidate code set and tedious clinical notes.",
"As statistics, the cost incurred by coding errors and the financial investment spent on improving coding quality are estimated to be $25 billion per year in the US (Lang, 2007).",
"Automatic ICD coding methods (Stanfill et al., 2010) have been proposed to resolve the deficiency of manual annotation, regarding it as a multi-label text classification task.",
"As shown in Figure 1, given a plain clinical text, the model tries to predict all the standardized codes from ICD-9.",
"Recently, neural networks were introduced (Mullenbach et al., 2018) (Falis et al., 2019) (Cao et al., 2020) to alleviate the deficiency of manual feature engineering process of traditional machine learning method (Larkey and Croft, 1996) (Perotte et al., 2014) in ICD coding task, and great progresses have been made.",
"Although effective, those methods either ignore the long-tail distribution of the code frequency or not target the noisy text in clinical note.",
"In the following, we will introduce the two characteristics and the reasons why they are critical for the automatic ICD coding.",
"Long-tail: The long-tail problem is unbalanced data distribution phenomenon.",
"And this problem is particularly noticeable in accompanied by a large target label set.",
"According to our statistics, the proportion of the top 10% high-frequency codes in MIMIC-III (John-son et al., 2016) occupied 85% of total occurrence.",
"And 22% of the codes have less than two annotated samples.",
"This is intuitive because people usually catch a cold but seldom have cancer.",
"Trained with these long-tail data, neural automatic ICD coding method would inclined to make wrong predictions with high-frequency codes.",
"Fortunately, intrinsic relationships among different diseases could be utilized to mitigate the deficiency caused by long-tail.",
"For example, Polyneuropathy in diabetes is a complication of diabetes , with a lower probability than other complications since the long term effect of vessel lesion reflect at nerve would come out in the late-stage.",
"If a model could learn shared information between polyneuropathy in diabetes and more common diseases diabetes , the prediction space would range to a set of complication of diabetes .",
"Further, utilizing the dynamic code co-occurrence, (the cascade relationship among complications of diabetes ) the confidence of predicting polyneuropathy in diabetes is gradually increased with the occurrence of vessel blockages , angina pectoris , hy-pertorphy of kidney , respectively.",
"Therefore, how to learn shared information with considering dynamic code co-occurrence characteristics, is a crucial and challenging issue.",
"Noisy text: The noisy text problem means that plentiful of information showing in clinical notes are redundant or misleading for ICD coding task.",
"Clinical notes are usually written by doctors and nurses with different writing styles, accompanied by polysemous abbreviations, abundant medication records and repetitive records of physical indicators.",
"According to our statistics 1 , about 10% of words in a clinical note contribute to the code assign task, on average.",
"Other words are abundant medication records and repetitive records of physical indicators.",
"These words are not just redundant but also misleading to the ICD coding task.",
"For 1 We randomly select 20 clinical notes in MIMIC-III and manually highlight the essential words.",
"example, two critical patients with entirely different diseases could take similar medicines and have similar physical indicators in the rescue course.",
"We argue that the noisy clinical notes are hard to read for both humans and machines.",
"Training with such noisy text would confuse the model about where to focus on, and make wrong decisions due to the semantic deviation.",
"Therefore, another challenging problem is how to deal with the noisy text in ICD coding task.",
"In this paper, we propose an I nteractive S hared Representation Network with SelfD istillation Mechanism (ISD) to address the above issues.",
"To mitigate the disadvantage caused by the longtail issue, we extract shared representations among high-frequency and low-frequency codes from clinical notes.",
"Codes with different occurrence frequencies all make binary decisions based on shared information rather than individually learning attention distributions.",
"Additional experiments indicate that those shared representations could extract common information relevant to ICD codes.",
"Further, we process the shared representations to an interaction decoder for polishing.",
"The decoder additional supervised by two code completion tasks to ensure the dynamic code co-occurrence patterns were learned.",
"To alleviate the noisy text issue, we further propose a self-distillation learning mechanism to ensure the extracted shared representations focus on the long clinical note's noteworthy part.",
"The teacher part makes predictions through constructed purified text with all crucial information; meanwhile, the student part takes the origin clinical note as a reference.",
"The student is forced to learn the teacher's shared representations with identical target codes.",
"1) We propose a framework capable of dealing with the long-tail and noisy text issues in the ICD coding task simultaneously.",
"2) To relieve the long-tail issue, we propose an interactive shared representation network, which can capture the internal connections among codes with different frequencies.",
"To handle the noisy text, we devise a self-distillation learning mechanism, guiding the model focus on important parts of clinical notes.",
"3) Experiments on two widely used ICD coding datasets, MIMIC-II and MIMIC-III, show our Figure 2: The architecture of Interactive Shared Representation Networks.",
"method outperforms state-of-the-art methods in macro F1 with 4% and 2%, respectively.",
"The source code is available at www.github.",
"com/tongzhou21/ISD .",
"ICD coding is an important task in the limelight for decades.",
"Feature based methods firstly brought to solve this task.",
"(Larkey and Croft, 1996) explored traditional machine learning algorithms, including KNN, relevance feedback, and Bayesian applying to ICD coding.",
"(Perotte et al., 2014) utilized SVM for classification in consideration of the hierarchy of ICD codes.",
"With the popularity of neural networks, researchers have proven the effectiveness of CNN and LSTM in ICD coding task.",
"(Mullenbach et al., 2018) propose a convolutional neural network with an attention mechanism to capture each code's desire information in source text also exhibit interpretability.",
"(Xie and Xing, 2018) develop tree LSTM to utilize code descriptions.",
"To further improve the performance, customized structures were introduced to utilize the code co-occurrence and code hierarchy of ICD taxonomies.",
"(Cao et al., 2020) embedded the ICD codes into hyperbolic space to explore their hierarchical na-ture and constructed a co-graph to import code co-occurrence prior.",
"We argue that they capture code co-occurrence in a static manner rather than dynamic multi-hop relations.",
"(Vu et al., 2020) consider learning attention distribution for each code and introduce hierarchical joint learning architecture to handle the tail codes.",
"Taking advantage of a set of middle representations to deal with the long-tail issue is similar to our shared representation setting, while our method enables every label to choose its desire representation from shared attention rather than its upper-level node, with more flexibility.",
"The direct solution to deal with an imbalance label set is re-sampling the training data (Japkow-icz and Stephen, 2002) (Shen et al., 2016) or re-weighting the labels in the loss function (Wang et al., 2017) (Huang et al., 2016).",
"Some studies treat the classification of tail labels as few-shot learning task.",
"(Song et al., 2019) use GAN to generate label-wise features according to ICD code descriptions.",
"(Huynh and Elhamifar, 2020) proposed shared multi-attention for multi-label image labeling.",
"Our work further constructs a label interaction module for label relevant shared representation to utilize dynamic label co-occurrence.",
"Lots of effects tried to normalize noisy texts before inputting to downstream tasks.",
"(Vateekul and Koomsubha, 2016) (Joshi and Deshpande, 2018) apply pre-processing techniques on twitter data for sentiment classification.",
"(Lourentzou et al., 2019) utilized seq2seq model for text normalization.",
"Others targeted at noisy input in an end2end manner by designing customized architecture.",
"(Sergio and Lee, 2020) (Sergio et al., 2020).",
"Different from previous works on noisy text, our method neither need extra text processing nor bring in specific parameters.",
"This section describes our interactive shared representation learning mechanism and self-distillation learning paradigm for ICD coding.",
"Figure 2 shows the architecture of interactive shared representation networks and manifest the inference workflow of our method.",
"We first encode the source clinical note to the hidden state with a multi-scale convolution neural network.",
"Then a shared attention module further extracts code relevant information shared among all codes.",
"A multi-layer bidirectional Transformer decoder insert between the shared attention representation extraction module and code prediction, establishes connections among shared code relevant representations.",
"We employ convolutional neural networks (CNN) for source text representation because the computation complexity affected by the length of clinical notes is non-negligible, although other sequen-Figure",
"sequen-Figure 3: The workflow of our method (ISD) during the training stage.",
"We take the example of training data with a clinical note and annotated four target codes.",
"tial encoders such as recurrent neural networks or Transformer(Vaswani et al., 2017) could capture longer dependency of text, theoretically.",
"CNN could encode local n-gram pattern, critical in text classification, and with high computational effi-ciency.",
"The words in source text are first mapped into low-dimensional word embedding space, constitute a matrix E = { e 1 , e 2 , ..., e N x } .",
"Note that N x is the clinical note's length, e is the word vector with dimension d e .",
"As shown in Eq.",
"1 and 2, we concatenate the convolutional representation from kernel set C = { c 1 , c 2 , ..., c S } with different size k c to hidden representation matrix H = { h 1 , h 2 , ..., h N x } with size N x d l : h c j i = tanh( W c x i : i + k cj 1 + b c j ) (1) h i = { h c 0 i ; h c 1 i ; ... ; h c S i } (2) 3.2 Shared Attention The label attention method tends to learn relevant document representations for each code.",
"We argue that the attention of rare code could not be well learned due to lacking training data.",
"Motivated by (Huynh and Elhamifar, 2020) we propose shared attention to bridge the gap between high-frequency and low-frequency codes by learning shared representations HS through attention.",
"Code set with total number of N l codes represents in code embedding E l = { e l 1 , e l 2 , ..., e lN l } according to their text descriptions.",
"A set of trainable shared queries for attention with size N q d l is introduced, noted as E q = { e q 1 , e q 2 , ..., e qN q } , where N q is the total number of shared queries as a hyperparameter.",
"Then E q calculates shared attention representation HS = { h S 1 , h S 2 , ..., h SN q } with hidden representation H in Eq.",
"3 to 5: Attention( Q, K, V ) = softmax( QKT d k V ) (3) i = Attention( e qi , H, H ) (4) h Si = H i (5) In ideal conditions, those shared representations reflect the code relevant information corresponding to the source text.",
"We can predict codes through HS .",
"Each code i has its right to choose a shared representation in HS for code-specific vector through the highest dot product score s i .",
"With the supervision of binary cross-entropy loss function, the shared representation should have",
"Above shared attention mechanism lacks interaction among code relevant information, which is of great importance in the ICD coding task.",
"We implement this interaction through a bidirectional multi-layer Transformer decoder D with an additional code completion task.",
"The shared representation HS is considered the orderless sequential input of the decoder D .",
"Each layer of the Transformer contains interaction among shared representation HS through self-attention and interaction between shared representation and source text through source sequential attention.",
"To make sure the decoder could model the dynamic code co-occurrence pattern, we propose two code set completion tasks, shown at the bottom of Figure 3.",
"(1) Missing code completion: We construct a code sequence L tgt of a real clinical note X in the training set, randomly masking one code l mis .",
"The decoder takes this code sequence as input to predict the masked code.",
"L mis = logP ( l mis | L tgt \\ l mis l mask , X ) (9) (2) Wrong code removal: Similar to the above task, we construct a code sequence L tgt , but by randomly adding a wrong code l wro .",
"The decoder is aiming to fade the wrong code's representation with a special mask representation l mask .",
"The decoder could generate purificatory code relevant information with higher rationality with the above two tasks' learning.",
"The decoder is plugged to refine the shared representation HS to HS (cid:48) , so the subsequent dot product score is calculated by HS (cid:48) .",
"s i = max ( HS (cid:48) e li ) (11) 3.4 Self-distillation Learning Mechanism We argue that learning the desired shared attention distribution over such a long clinical text is difficult, and the i tends to be smooth, brings lots of unnecessary noise information.",
"Therefore we propose a self-distillation learning mechanism showing in the gray dotted lines of Figure 3.",
"With this mechanism, the model could learn superior intermediate representations from itself without introducing another trained model.",
"Considering a single clinical note X with target code set L tgt for training, we derive two paths inputted to the model.",
"The teacher's training data consists of the text descriptions XL tgt = { X l 1 , X l 2 , ..., X lN ltgt } .",
"We handle those code descriptions separately through the encoder and concatenate them into a flat sequence of hidden state HL tgt = { H l 1 ; H l 2 ; ... ; H l Nltgt } , where N l tgt is the number of code in L tgt , so the subsequent process in our model is not affected.",
"We optimize the teacher's prediction result y tgti through binary cross-entropy loss.",
"Student takes origin clinical note X as input and also have BCE loss to optimize.",
"We assume that an origin clinical note with thousands of words contains all desired codes' information, as well as less essential words.",
"The teacher's input contains all desired information that indicates codes to be predicted without any noise.",
"Ideal shared representations obtained from attention are supposed to collect code relevant information only.",
"Hence we treat the teacher's share representation HL tgt as a perfect example to the student.",
"A distillation loss encourages those two representation sequences to be similar.",
"cosine( HA , HB ) = N (cid:88) i h Ai h Bi (cid:107) h Ai (cid:107) (cid:107) h Bi (cid:107) (13) L dist = min { 1 cosine( HS (cid:48) , HL tgt (cid:48) ) } (14) Since we treat the shared representations without order restrict, every teacher have its rights to choose a suitable student, meanwhile, considering other teachers' appropriateness.",
"It implements with Hungarian algorithm (Kuhn, 1955) to calculates the cosine distance globally minimum.",
"Where (cid:48) denotes any shuffle version of the origin representation sequence.",
"The complete training pipeline of our method is shown in Figure 3.",
"The final loss function is the Model MIMIC-III-full MIMIC-III 50 AUC F1 P@8 AUC F1 P@5 Macro Micro Macro Micro Macro Micro Macro Micro CAML 0.895 0.986 0.088 0.539 0.709 0.875 0.909 0.532 0.614 0.609 DR-CAML 0.897 0.985 0.086 0.529 0.690 0.884 0.916 0.576 0.633 0.618 MSATT-KG 0.910 0.992 0.090 0.553 0.728 0.914 0.936 0.638 0.684 0.644 MultiResCNN 0.910 0.986 0.085 0.552 0.734 0.899 0.928 0.606 0.670 0.641 HyperCore 0.930 0.989 0.090 0.551 0.722 0.895 0.929 0.609 0.663 0.632 LAAT 0.919 0.988 0.099 0.575 0.738 0.925 0.946 0.666 0.715 0.675 JointLAAT 0.921 0.988 0.107 0.575 0.735 0.925 0.946 0.661 0.716 0.671 ISD (Ours) 0.938 0.990 0.119 0.559 0.745 0.935 0.949 0.679 0.717 0.682 0 .",
"weighting sum of the above losses.",
"For fair comparison, we follow the datasets used by previous work on ICD coding (Mullenbach et al., 2018) (Cao et al., 2020), including MIMIC-II (Jouhet et al., 2012) and MIMIC-III (Johnson et al., 2016).",
"The third edition is the extension of II.",
"Both datasets contain discharge summaries that are tagged manually with a set of ICD-9 codes.",
"The dataset preprocessing process is consistent with (Mullenbach et al., 2018).",
"For MIMIC-III full dataset, there are 47719, 1631, 3372 different patients' discharge summaries for training, development, and testing, respectively.",
"Totally 8921 unique codes occur in those three parts.",
"MIMIC-III 50 dataset only retains the most frequent codes appear in full setting, leave 8067, 1574, 1730 discharge summaries for training, development, and testing, respectively.",
"MIMIC-II dataset contains 5031 unique codes divided into 20533 and 2282 clinical notes for training and testing, respectively.",
"As in previous works (Mullenbach et al., 2018), we evaluate our method using both the micro and macro, F1 and AUC metrics.",
"As well as P@8 indicates the proportion of the correctly-predicted codes in the top-8 predicted codes.",
"PyTorch (Paszke et al., 2019) is chosen for our method's implementation.",
"We perform a grid search over all hyperparameters for each dataset.",
"The parameter selections are based on the tradeoff between validation performance and training efficiency.",
"We set the word embedding size to 100.",
"We build the vocabulary set using the CBOW Word2Vec method (Mikolov et al., 2013) to pre-train word embeddings based on words in all MIMIC data, resulting in the most frequent 52254 words included.",
"The multi-scale convolution filter size is 5, 7, 9, 11, respectively.",
"The size of each filter output is one-quarter of the code embedding size.",
"We set code embedding size to 128 and 256 for the MIMIC-II and MIMIC-III, respectively.",
"The size of shared representation is 64.",
"We utilize a two-layer Transformer for the interactive decoder.",
"For the loss function, we set mis = 0 .",
"5 , mis = 5 e 4 , rem = 5 e 4 , tgt = 0 .",
"5 , and dist = 1 e 3 to adjust the scale of different supervisory signals.",
"We use Adam for optimization with an initial learning rate of 3e-4, and other settings keep the default.",
"HA-GRU: A Hierarchical Attention Gated Recurrent Unit model is proposed by (Baumel et al., 2017) to predict ICD codes on the MIMIC-II dataset.",
"CAML & DR-CAML: (Mullenbach et al., 2018) proposed the Convolutional Attention Net-Model AUC F1 P@8 Macro Micro Macro Micro ISD (Ours) 0.938 0.990 0.119 0.559 0.745 w/o distillation loss 0.935 0.986 0.103 0.551 0.743 w/o self-distillation 0.934 0.981 0.099 0.547 0.724 w/o code completion task 0.931 0.988 0.061 0.522 0.728 w/o co-occurrence decoder 0.936 0.989 0.084 0.547 0.743 Table 3: Ablation results on the MIMIC-III-full test set.",
"work for Multi-Label Classification (CAML), which learning attention distribution for each label.",
"DR-CAML indicates Description Regularized CAML, an extension incorporating the text description of codes.",
"MSATT-KG: The Multi-Scale Feature Attention and Structured Knowledge Graph Propagation was proposed by (Xie et al., 2019) They capture variable n-gram features and select multi-scale features through densely connected CNN and a multi-scale feature attention mechanism.",
"GCN is also employed to capture the hierarchical relationships among medical codes.",
"MultiResCNN: The Multi-Filter Residual Convolutional Neural Network was proposed by (Li and Yu, 2020).",
"They utilize the multi-filter convolutional layer capture variable n-gram patterns and residual mechanism to enlarge the receptive field.",
"HyperCore: Hyperbolic and Co-graph Representation was proposed by (Cao et al., 2020).",
"They explicitly model code hierarchy through hyperbolic embedding and learning code co-occurrence thought GCN.",
"LAAT & JointLAAT: (Vu et al., 2020) Label Attention model (LAAT) for ICD coding was proposed by (Vu et al., 2020), learning attention distributions over LSTM encoding hidden states for each code.",
"JointLAAT is an extension of LAAT with hierarchical joint learning.",
"The left part of Table 1 and Table 2 show the results of our method on the MIMIC-III and MIMIC-II dataset with the whole ICD code set.",
"Compared with previous methods generating attention distribution for each code, our method achieves better results on most metrics, indicating the shared attention mechanism's effectiveness.",
"It is noteworthy that the macro results have more significant improvement compare to micro than previous methods.",
"Since the macro indicators are mainly affected by tail codes' performance, our approach benefits from the interactive shared representations among codes with different frequencies.",
"Compared with the static code interaction of co-occurrence implemented in (Cao et al., 2020), our method achieves higher scores, indicating that the dynamic code interaction module could capture more complex code interactive information other than limit steps of message passing in GCN.",
"The right part of Table 1 shows the results of our method on the MIMIC-III dataset with the most frequent 50 codes.",
"It proved that our approach's performance would not fall behind with a more balanced label set.",
"To investigate the effectiveness of our proposed components of the method, we also perform the ablation experiments on the MIMIC-III-full dataset.",
"The ablation results are shown in Table 3, indicating that none of these models can achieve a comparable result with our full version.",
"Demonstrate that all those factors contribute a certain improvement to our model.",
"(1) Effectiveness of Self-distillation.",
"Specifi-cally, when we discard the whole self-distillation part (w/o self-distillation), the performance drops, demonstrate the effectiveness of the self-distillation.",
"To further investigate the contribution of the self-distillation module, whether the more training data we constructed, we retain the teacher path and remove the loss between shared representations (w/o distillation loss), the performance still slightly drops.",
"It can be concluded that although the positive effects of the constructed training data in the teacher path, the distillation still plays a role.",
"(2) Effectiveness of Shared Representation.",
"When we remove the self-distillation mechanism (w/o self-distillation), the contribution of shared representation part can be deduced compared to the performance of CAML.",
"Result showing our version still have 1.1% advantage in macro F1, indicating the effectiveness of shared representation.",
"(3) Effectiveness of Code Completion Task.",
"When we neglect the missing code completion task and wrong code removal task (w/o code completion tasks), the code interactive decoder optimizes with final prediction loss only.",
"The performance is even worse than the model without the whole code interaction module (w/o co-occurrence decoder).",
"It indicates that the additional code completion task is the guarantee of modeling dynamic code co-occurrence characteristics.",
"Further compared with the model with label attention rather than our proposed shared representations (w/o shared repre-sentation), the performance even worse, showing the code completion task is also the guarantee of the effectiveness of shared representations.",
"Without this self-supervised task, the shared information is obscure and the performance drops due to the join of dubiously oriented model parameters.",
"To further explore our proposed interactive shared attention mechanism, we conduct comparisons among various numbers of shared representations in our method.",
"And visualization the attention distribution over source text of different shared representations, as well as the information they extracted.",
"(1) The Analysis of Shared Representations Size.",
"As shown in Table 4, both large or small size would harm the final performance.",
"When the shared size is set to 1, the shared representation degrades into a global representation.",
"A single vector compelled to predict multiple codes causes the performance drops, as Table 4 shows.",
"We also initialize the shared embeddings with ICD's hierarchical parent node.",
"Specifically, there are 1159 unique first three characters in the raw ICD code set of MIMIC-III-full.",
"We initialize those shared embeddings with the mean vector of their corresponding child codes.",
"Although the hierarchical priori knowledge is introduced, the computation Clinical Note: chief complaint elective admit major surgical or invasive procedure recoiling acomm aneurysm history of present illness on she had a crushing headache but stayed at home the next day ... angiogram with embolization and or stent placement medication take aspirin 325mg ...",
"complexity and uneven node selection could cause the model to be hard to optimize and overfit high frequent parent nodes.",
"(2) Visualization of Shared Attention Distribution.",
"The attention distribution of different shared representations shown in Table 5 indicates that they have learned to focus on different source text patterns in the noisy clinical note to represent code relevant information.",
"(3) The Analysis of Self-distillation.",
"As shown in Table 6, the attention weights over clinical text learned by model with the training of self-distillation mechanism are more sharp than origin learning process.",
"In combination with Table 5, it can be concluded that the self-distillation mechanism could help the model more focus on the desire words of clinical text.",
"This paper proposes an interactive shared representation network and a self-distillation mechanism for the automatic ICD coding task, to address the long-tail and noisy text issues.",
"The shared representations can bridge the gap between the learning process of frequent and rare codes.",
"And the code interaction module models the dynamic code co-occurrence characteristic, further improving the performance of tail codes.",
"Moreover, to address the noisy text issue, the self-distillation learning mechanism helps the shared representations focus on code-related information in noisy clinical notes.",
"Experimental results on two MIMIC datasets indicate that our proposed model significantly outperforms previous state-of-the-art methods.",
"This work is supported by the National Key Research and Development Program of China (No.2017YFB1002101), the National Natural Science Foundation of China (No. 61806201, 61976211).",
"This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301), the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006), independent research project of National Laboratory of Pattern Recognition and the CCF-Tencent Open Research Fund."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"Functional Distributional Semantics provides a linguistically interpretable framework for distributional semantics, by representing the meaning of a word as a function (a binary clas-sifier), instead of a vector.",
"However, the large number of latent variables means that inference is computationally expensive, and training a model is therefore slow to converge.",
"In this paper, I introduce the Pixie Autoencoder, which augments the generative model of Functional Distributional Semantics with a graph-convolutional neural network to perform amortised variational inference.",
"This allows the model to be trained more effectively, achieving better results on two tasks (semantic similarity in context and semantic composition), and outperforming BERT, a large pre-trained language model.",
"The aim of distributional semantics is to learn the meanings of words from a corpus (Harris, 1954; Firth, 1951, 1957).",
"Many approaches learn a vector for each word, including count models and embedding models (for an overview, see: Erk, 2012; Clark, 2015), and some recent approaches learn a vector for each token in a particular context (for example: Peters et al., 2018; Devlin et al., 2019).",
"However, such vector representations do not make a clear distinction between words and the things they refer to.",
"This means that such models are challenging to interpret semantically.",
"In contrast, Functional Distributional Semantics (Emer-son and Copestake, 2016) aims to provide a framework which can be interpreted in terms of model theory, a standard approach to formal semantics.",
"Furthermore, this framework supports first-order logic, where quantifying over logical variables is replaced by marginalising out random variables (Emerson and Copestake, 2017b; Emerson, 2020b).",
"This connection to logic is a clear strength over vector-based models.",
"Even the linguistically inspired tensor-based framework of Coecke et al. (2010) and Baroni et al. (2014) cannot model quan-tifiers, as shown by Grefenstette (2013).",
"However, the linguistic interpretability of Functional Distributional Semantics comes at a computational cost, with a high-dimensional latent variable for each token.",
"Training a model by gradient descent requires performing Bayesian inference over these latent variables, which is intractable to calculate exactly.",
"The main theoretical contribution of this paper is to present an amortised variational inference algorithm to infer these latent variables.",
"This is done using a graph-convolutional network, as described in 3.",
"The main empirical contribution of this paper is to demonstrate that the resulting system, the Pixie Autoencoder, improves performance on two semantic tasks, as described in 4.",
"I also present the first published results of applying a large language model (BERT) to these tasks, showing that results are sensitive to linguistic detail in how the model is applied.",
"Despite being a smaller model trained on less data, the Pixie Autoencoder outperforms BERT on both tasks.",
"While the proposed inference network is designed for Functional Distributional Semantics, the proposed techniques should also be of wider interest.",
"From a machine learning perspective, amortised variational inference with graph convolutions (3.3) could be useful in other tasks where the input data is a graph, and the use of belief propagation to reduce variance (3.4) could be useful for training other generative models.",
"However, the most important contribution of this work is from a computational semantics perspective.",
"This paper takes an important step towards truth-conditional distributional semantics, showing that truth-conditional functions can be efficiently learnt from a corpus.",
"In this section, I summarise previous work on Functional Distributional Semantics.",
"I begin in 2.1 by introducing model-theoretic semantics, which motivates the form of the machine learning model.",
"I then explain in 2.2 how the meaning of a word is represented as a binary classifier, and finally present the probabilistic graphical model in 2.3.",
"The basic idea of model-theoretic semantics is to define meaning in terms of truth , relative to model structures .",
"A model structure can be understood as a model of the world.",
"In the simplest case, it consists of a set of individuals (also called entities ), as illustrated in Fig.",
"1. The meaning of a content word is called a predicate , and is formalised as a truth-conditional function , which maps individuals to truth values (either truth or falsehood ).",
"Because of this precisely defined notion of truth, model theory naturally supports logic, and has become a prominent approach to formal semantics.",
"For example, if we know the truth-conditional functions for pepper and red , we can use first-order logic to calculate the truth of sentences like Some peppers are red , for model structures like Fig.",
"1. For detailed expositions, see: Cann (1993); Allan (2001); Kamp and Reyle (2013).",
"Functional Distributional Semantics (Emerson and Copestake, 2016; Emerson, 2018) embeds modeltheoretic semantics into a machine learning model.",
"An individual is represented by a feature vector, called a pixie .",
"1 For example, all three red pepper individuals in Fig. 1 would be represented by the 1 Terminology introduced by Emerson and Copestake (2017a).",
"This provides a useful shorthand for feature representation of an individual.",
"same pixie, as they have the same features.",
"A predicate is represented by a semantic function , which maps pixies to probabilities of truth.",
"For example, the function for pepper should map the red pepper pixie to a probability close to",
"1. This can be seen in formal semantics as a truth-conditional function, and in a machine learning as a binary classifier.",
"This ties in with a view of concepts as abilities, as proposed in some schools of philosophy (for example: Dummett, 1976, 1978; Kenny, 2010; Sutton, 2015, 2017), and some schools of cognitive science (for example: Labov, 1973; McCloskey and Glucksberg, 1978; Murphy, 2002, pp. 13, 134 138; Zentall et al., 2002).",
"In NLP, some authors have suggested representing concepts as classifiers, including Larsson (2013), working in the framework of Type Theory with Records (Cooper, 2005; Cooper et al., 2015).",
"Similarly, Schlangen et al. (2016) and Zarrie and Schlangen (2017a,b) train image classifiers using captioned images.",
"We can also view such a classifier as defining a region in the space, as argued for by Gardenfors (2000, 2014).",
"This idea is used for distributional semantics by Erk (2009a,b), for colour terms by McMahan and Stone (2015), and for knowledge base completion by Bouraoui et al. (2017).",
"To learn semantic functions in distributional semantics, Emerson and Copestake define a probabilistic graphical model that generates semantic dependency graphs, shown in Fig.",
"3. The basic idea is that an observed dependency graph is true of some unobserved situation comprising a number of individuals.",
"Given a sembank (a corpus parsed into dependency graphs), the model can be trained unsupervised, to maximise the likelihood of generating the data.",
"An example graph is shown in Fig. 2, which corresponds to sentences like Every picture tells a story or The story was told by a picture (note that only content words have nodes).",
"More precisely, given a graph topology (a dependency graph where the edges are labelled but the nodes are not), the model generates a predicate for each node.",
"Rather than directly generating predicates, the model assumes that each predicate describes an unobserved individual.",
"2 The model 2 This assumes a neo-Davidsonian approach to event semantics (Davidson, 1967; Parsons, 1990), where verbal predicates are true of event individuals.",
"Middle row: for each individual, each predicate r in the vocabulary V is randomly true ( (cid:62) ) or false ( ), according to the predicate's semantic function.",
"Each function is modelled by a feedforward neural net.",
"Bottom row: for each individual, we randomly generate one predicate, out of all predicates true of the individual.",
"Only these nodes are observed.",
"first generates a pixie to represent each individual, then generates a truth value for each individual and each predicate in the vocabulary, and finally generates a single predicate for each individual.",
"The pixies and truth values can be seen as a probabilistic model structure, which supports a probabilistic first-order logic (Emerson and Copestake, 2017b; Emerson, 2020b).",
"This is an important advantage over other approaches to distributional semantics.",
"A pixie is defined to be a sparse binary-valued vector, with D units (dimensions), of which exactly C are active (take the value 1).",
"3 The joint distribution over pixies is defined by a Cardinality Restricted Boltzmann Machine (CaRBM) (Swer-sky et al., 2012), which controls how the active units of each pixie should co-occur with the active noun corresponds to a plural individual, which would be compatible with Link (1983)'s approach to plural semantics.",
"3 Although a pixie is a feature vector, the features are all latent in distributional semantics, in common with models like LDA (Blei et al., 2003) or Skip-gram (Mikolov et al., 2013).",
"units of other pixies in the same dependency graph.",
"A CaRBM is an energy-based model, meaning that the probability of a situation is proportional to the exponential of the negative energy of the situation.",
"This is shown in (1), where s denotes a situation comprising a set of pixies with semantic dependencies between them, and E ( s ) denotes the energy.",
"The energy is defined in (2), 4 where x l y denotes a dependency from pixie x to pixie y with label l .",
"The CaRBM includes a weight matrix w ( l ) for each label l .",
"The entry w ( l ) ij controls how likely it is for units i and j to both be active, when linked by dependency l .",
"Each graph topology has a corresponding CaRBM, but the weight matrices are shared across graph topologies.",
"Normalising the distribution in (2) is intractable, as it requires summing over all possible s .",
"The semantic function t ( r ) for a predicate r is defined to be one-layer feedforward net, as shown in (3), where denotes the sigmoid function.",
"Each predicate has a vector of weights v ( r ) .",
"Lastly, the probability of generating a predicate r for a pixie x is given in (4).",
"The more likely r is to be true, the more likely it is to be generated.",
"Normalising requires summing over the vocabulary.",
"In summary, the model has parameters w ( l ) (the world model), and v ( r ) (the lexical model).",
"These are trained on a sembank using the gradients in (5), where g is a dependency graph.",
"For w ( l ) , only the first term is nonzero; for v ( r ) , only the second term.",
"4 I follow the Einstein summation convention, where a repeated subscript is assumed to be summed over.",
"For example, x i y i is a dot product.",
"Furthermore, I use uppercase for random variables, and lowercase for values.",
"I abbreviate P ( X = x ) as P ( x ) , and I abbreviate P ( T r,X = (cid:62) ) as P ( t r,X ) .",
"A practical challenge for Functional Distributional Semantics is training a model in the presence of high-dimensional latent variables.",
"In this section, I present the Pixie Autoencoder, which augments the generative model with an encoder that predicts these latent variables.",
"For example, consider dependency graphs for The child cut the cake and The gardener cut the grass .",
"These are true of rather different situations.",
"Although the same verb is used in each, the pixie for cut should be different, because they describe events with different physical actions and different tools (slicing with a knife vs. driving a lawnmower).",
"Training requires inferring posterior distributions for these pixies, but exact inference is intractable.",
"In 3.1 and 3.2, I describe previous work: amortised variational inference is useful to efficiently predict latent variables; graph convolutions are useful when the input is a graph.",
"In 3.3, I present the encoder network, to predict latent pixies in Functional Distributional Semantics.",
"It uses the tools introduced in 3.1 and 3.2, but modified to better suit the task.",
"In 3.4, I explain how the encoder network can be used to train the generative model, since training requires the latent variables.",
"Finally, I summarise the architecture in 3.5, and compare it to other autoencoders in 3.6.",
"Calculating the gradients in (5) requires taking expectations over situations (both the marginal expectation E s , and the conditional expectation E s | g given a graph).",
"Exact inference would require summing over all possible situations, which is intractable for a high-dimensional space.",
"This is a general problem when working with probabilistic models.",
"Given an intractable distribution P ( x ) , a variational inference algorithm approximates this by a simpler distribution Q ( x ) , parametrised by q , and then optimises the parameters so that Q is as close as possible to P , where closeness is defined using KL-divergence (for a detailed introduction, see: Jordan et al., 1999).",
"However, variational inference algorithms typically require many update steps in order to optimise the approximating distribution Q .",
"An amortised variational inference algorithm makes a further approximation, by estimating the parameters q using an inference network (Kingma and Welling, 2014; Rezende et al., 2014; Titsias and Lazaro-Gredilla, 2014).",
"The inference network might not predict the optimal parameters, but the calculation can be performed efficiently, rather than requiring many update steps.",
"The network has its own parameters , which are optimised so that it makes good predictions for the variational parameters q .",
"For graph-structured input data, a standard feedforward neural net is not suitable.",
"In order to share parameters across similar graph topologies, an appropriate architecture is a graph-convolutional network (Duvenaud et al., 2015; Kearnes et al., 2016; Kipf and Welling, 2017; Gilmer et al., 2017).",
"This produces a vector representation for each node in the graph, calculated through a number of layers.",
"The vector for a node in layer k is calculated based only on the vectors in layer k 1 for that node and the nodes connected to it.",
"The same weights are used for every node in the graph, allowing the network to be applied to different graph topologies.",
"For linguistic dependency graphs, the dependency labels carry important information.",
"Marcheggiani and Titov (2017) propose using a different weight matrix for each label in each direction.",
"This is shown in (6), where: h ( k,X ) denotes the vector representation of node X in layer k ; w ( k,l ) denotes the weight matrix for dependency label l in layer k ; f is a non-linear activation function; and the sums are over outgoing and incoming dependencies.",
"5 There is a separate weight matrix w ( k,l 1 ) for a dependency in the opposite direction, and as well as a matrix w ( k, self ) for updating a node based on itself.",
"Bias terms are not shown.",
"h ( k,X ) i = f (cid:18) w ( k, self ) ij h ( k 1 ,X ) j + (cid:88) Y l X w ( k,l ) ij h ( k 1 ,Y ) j + (cid:88) Y l X w ( k,l 1 ) ij h ( k 1 ,Y ) j (cid:19) (6) 3.3 Predicting Pixies For Functional Distributional Semantics, Emerson and Copestake (2017a) propose a mean-field variational inference algorithm, where Q has an independent probability q ( X ) i of each unit i being active, for each node X .",
"Each probability is optimised based on the mean activation of all other units.",
"5 For consistency with Fig. 3, I write X for a node (a random variable), rather than x (a pixie).",
"This makes the simplifying assumption that the posterior distribution can be approximated as a single situation with some uncertainty in each dimension.",
"For example, for a dependency graph for The gardener cut the grass , three mean vectors are inferred, for the gardener, the cutting event, and the grass.",
"These vectors are contextualised, because they are jointly inferred based on the whole graph.",
"I propose using a graph-convolutional network to amortise the inference of the variational mean-field vectors.",
"In particular, I use the formulation in (6), with two layers.",
"The first layer has a tanh activation, and the second layer has a sigmoid (to output probabilities).",
"In addition, if the total activation in the second layer is above the total cardinality C , the activations are normalised to sum to C .",
"The network architecture is illustrated in Fig.",
"4. The network is trained to minimise the KL-divergence from P ( s | g ) (defined by the generative model) to Q ( s ) (defined by network's output).",
"This is shown in (7), where EQ ( s ) denotes an expectation over s under the variational distribution.",
"To minimise the KL-divergence, we can differentiate",
"differentiate with respect to the inference network parameters .",
"This gives (8), where H denotes entropy.",
"D ( Q (cid:107) P ) = EQ ( s ) (cid:2) log P ( s ) (cid:3) EQ ( s ) (cid:2) log P ( g | s ) (cid:3) H ( Q ) (8) The first term can be calculated exactly, because the log probability is proportional to the negative energy, which is a linear function of each pixie, and the normalisation constant is independent of s and Q .",
"This term therefore simplifies to the energy of the mean-field pixies, E ( E [ s ]) .",
"The last term can be calculated exactly, because Q was chosen to be simple.",
"Since each dimension is independent, it is (cid:80) q q log q + (1 q ) log(1 q ) , summing over the variational parameters.",
"The second term is more difficult, for two reasons.",
"Firstly, calculating the probability of generating a predicate requires summing over all predicates, which is computationally expensive.",
"We can instead sum over a random sample of predicates (along with the observed predicate).",
"However, by ignoring most of the vocabulary, this will overestimate the probability of generating the correct predicate.",
"I have mitigated this by upweighting this term, similarly to a -VAE (Higgins et al., 2017).",
"The second problem is that the log probability of a predicate being true is not a linear function of the pixie.",
"The first-order approximation would be to apply the semantic function to the mean-field pixie, as suggested by Emerson and Copestake (2017a).",
"However, this is a poor approximation when the distribution over pixies has high variance.",
"By approximating a sigmoid using a probit and assuming the input is approximately Gaussian, we can derive (9) (Murphy, 2012, 8.4.4.2).",
"Intuitively, the higher the variance, the closer the expected value to 1 / 2 .",
"For a Bernoulli distribution with probability q , scaled by a weight v , the variance is v 2 q (1 q ) .",
"E [ ( x )] (cid:32) E [ x ] (cid:112) 1 + 8 Var [ x ] (cid:33) (9) With the above approximations, we can calculate (4) efficiently.",
"However, because the distribution over predicates in (4) only depends on relative probabilities of truth, the model might learn to keep them all close to 0, which would damage the logical interpretation of the model.",
"To avoid this, I have modified the second term of (5) and second term of (8), using not only the probability of generating a predicate for a pixie, P ( r | x ) , but also the probability of the truth of a predicate, P ( t r,X | x ) .",
"This technique of constraining latent variables to improve interpretability is similar to how Rei and Sgaard (2018) constrain attention weights.",
"Finally, as with other autoencoder models, there is a danger of learning an identity function that gen-eralises poorly.",
"Here, the problem is that the pixie distribution for a node might be predicted based purely on the observed predicate for that node, ignoring the wider context.",
"To avoid this problem, we can use dropout on the input, a technique which has been effective for other NLP models (Iyyer et al., 2015; Bowman et al., 2016), and which is closely related to denoising autoencoders (Vincent et al., 2008).",
"More precisely, we can keep the graph topology intact, but randomly mask out the predicates for some nodes.",
"For a masked node X , I have initialised the encoder with an embedding as shown in (10), which depends on the node's dependencies (only on the label of each dependency, not on the predicate of the other node).",
"The previous section explains the inference network and how it is trained.",
"To train the generative model, the predictions of the inference network (without dropout) are used to approximate the conditional expectations E s | g in (5).",
"However, the prior expectation E s cannot be calculated using the inference network.",
"Intuitively, the prior distribution encodes a world model, and this cannot be summarised as a single mean-field situation.",
"Emerson and Copestake (2016) propose an MCMC algorithm using persistent particles, summing over samples to approximate the expectation.",
"Many samples are required for a good approximation, which is computationally expensive.",
"Taking a small number produces high variance gradients, which makes training less stable.",
"However, we can see in (5) that we don't need the prior expectation E s on its own, but rather the difference E s | g E s .",
"So, to reduce the variance of gradients, we can try to explore the prior distribution only in the vicinity of the inference network's predictions.",
"In particular, I propose taking the inference network's predictions and updating this mean-field distribution to bring it closer to the prior under the generative model.",
"This can be done using belief propagation (for an introduction, see: Yedidia et al., 2003), as applied to CaRBMs by Swersky et al. (2012).",
"For example, given the predicted mean-field vectors for a gardener cutting grass, we would modify these vectors to make the distribution more closely match what is plausible under the generative model (based on the world model, ignoring the observed predicates).",
"This can be seen as the bias-variance trade-off: the inference network introduces a bias, but reduces the variance, thereby making training more stable.",
"The Pixie Autoencoder is a combination of the generative model from Functional Distributional Semantics (generating dependency graphs from latent situations) and an inference network (inferring latent situations from dependency graphs), as illustrated in Figs.",
"3 and",
"4. They can be seen as an decoder and encoder, respectively.",
"It is trained on a sembank, with the generative model maximising the likelihood of the dependency graphs, and the inference network minimising KL-divergence with the generative model.",
"To calculate gradients, the inference network is first applied to a dependency graph to infer the latent situation.",
"The generative model gives the energy of the situation and the likelihood of the observed predicates (compared with random predicates).",
"We also calculate the entropy of the situation, and apply belief propagation to get a situation closer to the prior.",
"This gives us all terms in (5) and (8).",
"A strength of the Pixie Autoencoder is that it supports logical inference, following Emerson and Copestake (2017a).",
"This is illustrated in Fig.",
"5. For example, for a gardener cutting grass or a child cutting a cake, we could ask whether the cutting event is also a slicing event or a mowing event.",
"I have motivated the Pixie Autoencoder from the perspective of the generative model.",
"However, we can also view it from the perspective of the encoder, comparing it with a Variational Autoencoder (VAE) which uses an RNN to generate text from a latent vector (Bowman et al., 2016).",
"The VAE uses a Gaussian prior, but the Pixie Autoencoder has a structured prior defined by the world model.",
"Hoffman and Johnson (2016) find that VAEs struggle to fit a Gaussian prior.",
"In contrast, the Y Z X h ( X ) h ( Y ) h ( Z ) e ( p ) e ( q ) e ( r ) T a,Y Figure 5: An example of logical inference, building on Fig.",
"Pixie Autoencoder learns the prior, fitting the world model to the inference network's predictions.",
"Since the world model makes structural assumptions, defining energy based only on semantic dependencies, we can see the world model as a structural prior: the inference network is encouraged, via the first term in (8), to make predictions that can be modelled under these structural assumptions.",
"I have evaluated on two datasets, chosen for two reasons.",
"Firstly, they allow a direct comparison with previous results (Emerson and Copestake, 2017b).",
"Secondly, they require fine-grained semantic understanding, which starts to use the expressiveness of a functional model.",
"More open-ended tasks such as lexical substitution and question answering would require combining my model with additional components such as a semantic parser and a coreference resolver.",
"Robust parsers exist which are compatible with my model (for example: Buys and Blunsom, 2017; Chen et al., 2018), but this would be a non-trivial extension, particularly for incorporating robust coreference resolution, which would ideally be done hand-in-hand with semantic analysis.",
"Incorporating fine-grained semantics into such tasks is an exciting research direction, but beyond the scope of the current paper.",
"When reporting results, significance tests follow Dror et al. (2018).",
"I trained the model on WikiWoods (Flickinger et al., 2010; Solberg, 2012), which provides DMRS graphs (Copestake et al., 2005; Copestake, 2009) for 55m sentences (900m tokens) from the English Wikipedia (July 2008).",
"It was parsed with the English Resource Grammar (ERG) (Flickinger, 2000, 2011) and PET parser (Callmeier, 2001; Toutanova et al., 2005), with parse ranking trained on We-Science (Ytrestl et al., 2009).",
"It is updated with each ERG release; I used the 1212 version.",
"I preprocessed the data following Emerson and Copestake (2016), giving 31m graphs.",
"I implemented the model using DyNet (Neubig et al., 2017) and Pydmrs (Copestake et al., 2016).",
"6 I initialised the generative model following Emerson and Copestake (2017b) using sparse PPMI vectors (QasemiZadeh and Kallmeyer, 2016).",
"I first trained the encoder on the initial generative model, then trained both together.",
"I used L2 regularisation and the Adam optimiser (Kingma and Ba, 2015), with separate L2 weights and learning rates for the world model, lexical model, and encoder.",
"I tuned hyperparameters on the RELPRON dev set (see 4.3), and averaged over 5 random seeds.",
"BERT (Devlin et al., 2019) is a large pre-trained language model with a Transformer architecture (Vaswani et al., 2017), trained on 3.3b tokens from the English Wikipedia and BookCorpus (Zhu et al., 2015).",
"It produces high-quality contextualised embeddings, but its architecture is not motivated by linguistic theory.",
"I used the version in the Transformers library (Wolf et al., 2019).",
"To my knowledge, large language models have not previously been evaluated on these datasets.",
"The RELPRON dataset (Rimell et al., 2016) consists of terms (such as telescope ), paired with up to 10 properties (such as device that astronomer use ).",
"The task is to find the correct properties for each term.",
"There is large gap between the state of the art (around 50%) and the human ceiling (near 100%).",
"The dev set contains 65 terms and 518 properties; the test set, 73 terms and 569 properties.",
"The dataset is too small to train on, but hyperparameters can be tuned on the dev set.",
"The dev and test terms are disjoint, to avoid high scores from overtuning.",
"6 https://gitlab.com/guyemerson/pixie Model Dev Test Previous work Vector addition (Rimell et al., 2016) .496 .472 Simplified Practical Lexical Function (Rimell et al., 2016) .496 .497 Vector addition (Czarnowska et al., 2019) .485 .475 Dependency vector addition (Czarnowska et al., 2019) .497 .439 Semantic functions (Emerson and Copestake, 2017b) .20 .16 Sem-func & vector ensemble (Emerson and Copestake, 2017b) .53 .49 Baselines Vector addition .488 .474 BERT (masked prediction) .206 .186 BERT (contextual prediction) .093 .134 BERT (masked prediction) & vector addition ensemble .498 .479 Proposed approach Pixie Autoencoder .261 .189 Pixie Autoencoder & vector addition ensemble .532 .489 Table 1: Mean Average Precision (MAP) on RELPRON development and test sets.",
"Previous work has shown that vector addition performs well on this task (Rimell et al., 2016; Czarnowska et al., 2019).",
"I have trained a Skip-gram model (Mikolov et al., 2013) using the Gen-sim library ( Reh u rek and Sojka, 2010), tuning weighted addition on the dev set.",
"For the Pixie Autoencoder, we can view the task as logical inference, finding the probability of truth of a term given an observed property.",
"This follows Fig. 5, applying the term a to either X or Z , according to whether the property has a subject or object relative clause.",
"BERT does not have a logical structure, so there are multiple ways we could apply it.",
"I explored many options, to make it as competitive as possible.",
"Following Petroni et al. (2019), we can rephrase each property as a cloze sentence (such as a device that an astronomer uses is a [MASK] . ).",
"However, RELPRON consists of pseudo-logical forms, which must be converted into plain text query strings.",
"For each property, there are many possible cloze sentences, which yield different predictions.",
"Choices include: grammatical number, articles, relative pronoun, passivisation, and position of the mask.",
"I used the Pattern library (Smedt and Daelemans, 2012) to inflect words for number.",
"Results are given in Table",
"1. The best performing BERT method uses singular nouns with a / an , despite sometimes being ungrammatical.",
"My most careful approach involves manually choosing articles (e.g. a device , the sky , water ) and number (e.g. plural people ) and trying three articles for the masked term ( a , an , or no article, taking the highest probability from the three), but this actually lowers dev set performance to .192.",
"Using plurals lowers performance to .089.",
"Surprisingly, using BERT large (instead of BERT base) lowers performance to .165.",
"As an alternative to cloze sentences, BERT can be used to predict the term from a contextualised embedding.",
"This performs worse (see Table 1), but the best type of query string is similar.",
"The Pixie Autoencoder outperforms previous work using semantic functions, but is still outperformed by vector addition.",
"Combining it with vector addition in a weighted ensemble lets us test whether they have learnt different kinds of information.",
"The ensemble significantly outperforms vector addition on the test set ( p < 0 . 01 for a permutation test), while the BERT ensemble does not ( p > 0 . 2 ).",
"However, it performs no better than the ensemble in previous work.",
"This suggests that, while the encoder has enabled the model to learn more information, the additional information is already present in the vector space model.",
"RELPRON also includes a number of confounders , properties that are challenging due to lexical overlap.",
"For example, an activity that soil supports is farming , not soil .",
"There are 27 confounders in the test set, and my vector addition model places all of them in the top 4 ranks for the confounding term.",
"In contrast, the Pixie Autoencoder and BERT do not fall for the confounders, with a mean rank of 171 and 266, respectively.",
"Nonetheless, vector addition remains hard to beat.",
"As vector space models are known to be good at topical relatedness (e.g. learning that astronomer and telescope are related, without necessarily learning how they are related), a tentative conclusion is that relatedness is missing from the contextualised models (Pixie Autoencoder and BERT).",
"Finding a principled way to integrate a notion of topic would be an interesting task for future work.",
"The GS2011 dataset evaluates similarity in context (Grefenstette and Sadrzadeh, 2011).",
"It comprises pairs of verbs combined with the same subject and object (for example, map show location and map express location ), annotated with similarity judgements.",
"There are 199 distinct pairs, and 2500 judgements (from multiple annotators).",
"Care must be taken when considering previous work, for two reasons.",
"Firstly, there is no development set.",
"Tuning hyperparameters directly on this dataset will lead to artificially high scores, so previous work cannot always be taken at face value.",
"For example, Hashimoto et al. (2014) report results for 10 settings.",
"I nonetheless show the best result in Table",
"2. My model is tuned on RELPRON (4.3).",
"Secondly, there are two ways to calculate correlation with human judgements: averaging for each distinct pair, or keeping each judgement separate.",
"Both methods have been used in previous work, and only Hashimoto et al. (2014) report both.",
"For the Pixie Autoencoder, we can view the task as logical inference, following Fig.",
"5. However, Van de Cruys et al. (2013) point out that the second verb in each pair is often nonsensical when combined with the two arguments (e.g. system visit criterion ), and so they argue that only the first verb should be contextualised, and then compared with the second verb.",
"This suggests we should apply logical inference only in one direction: we should find the probability of truth of the second verb, given the first verb and its arguments.",
"As shown in Table 2, this gives better results than applying logical inference in both directions and averaging the probabilities.",
"Logical inference in both directions allows a direct comparison with Emerson and Copestake (2017b), showing the Pixie Autoencoder performs better.",
"Logical inference in one direction yields state-of-the-art results on par with the best results of Hashimoto et al. (2014).",
"There are multiple ways to apply BERT, as in 4.3.",
"One option is to calculate cosine similarity of contextualised embeddings (averaging if tokenised into word-parts).",
"However, each subject-verb-object triple must be converted to plain text.",
"Without a dev set, it is reassuring that conclusions from RELPRON carry over: it is best to use singular nouns with a / an (even if ungrammatical) and it is best to use BERT base.",
"Manually choosing articles and number lowers performance to .320 (separate), plural nouns to .175, and BERT large to .226.",
"Instead of using cosine similarity, we can predict the other verb from the contextualised embedding, but this performs worse.",
"The Pixie Autoencoder outperforms BERT, significantly for separate scores ( p < 0 . 01 for a bootstrap test), but only suggestively for averaged scores ( p = 0 . 18 ).",
"I have presented the Pixie Autoencoder, a novel encoder architecture and training algorithm for Functional Distributional Semantics, improving on previous results in this framework.",
"For GS2011, the Pixie Autoencoder achieves state-of-the-art results.",
"For RELPRON, it learns information not captured by a vector space model.",
"For both datasets, it outperforms BERT, despite being a shallower model with fewer parameters, trained on less data.",
"This points to the usefulness of building semantic structure into the model.",
"It is also easy to apply to these datasets (with no need to tune query strings), as it has a clear logical interpretation."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Efficient structure encoding for graphs with labeled edges is an important yet challenging point in many graph-based models.",
"This work focuses on AMR-to-text generation A graph-to-sequence task aiming to recover natural language from Abstract Meaning Representations (AMR).",
"Existing graph-to-sequence approaches generally utilize graph neural networks as their encoders, which have two limitations: 1) The message propagation process in AMR graphs is only guided by the first-order adjacency information.",
"2) The relationships between labeled edges are not fully considered.",
"In this work, we propose a novel graph encoding framework which can effectively explore the edge relations.",
"We also adopt graph attention networks with higher-order neighborhood information to encode the rich structure in AMR graphs.",
"Experiment results show that our approach obtains new state-of-the-art performance on English AMR benchmark datasets.",
"The ablation analyses also demonstrate that both edge relations and higher-order information are beneficial to graph-to-sequence modeling.",
"Abstract Meaning Representation (Banarescu et al., 2013) is a sentence-level semantic representation formalized by a rooted directed graph, where nodes are concepts and edges are semantic relations.",
"Since AMR is a highly structured meaning representation, it can promote many semantic related tasks such as machine translation (Song et al., 2019) and summarization (Liao et al., 2018).",
"However, the usage of AMR graphs can be challenging, since it is non-trivial to completely capture the rich structural information in the graph-based data, especially when the graph has labeled edges.",
"Generation from AMR aims to translate the AMR semantics into the surface form (natural lan-guage).",
"It is a basic Graph-to-sequence task that directly takes AMR as input.",
"Figure 1 (left) gives a standard AMR graph and its corresponding surface form.",
"Early works utilize sequence-to-sequence framework by linearizing the entire graph (Konstas et al., 2017; Cao and Clark, 2019).",
"Such representation may lose useful structural information.",
"In recent studies, graph neural networks (GNNs) have been in a dominant position on this task and achieved state-of-the-art performance (Beck et al., 2018; Song et al., 2018; Guo et al., 2019; Damonte and Cohen, 2019).",
"However, In these GNN-based models, the representation of each concept node is only updated by the aggregated information from its neighbors, which leads to two limitations: 1) The interaction between indirectly connected nodes heavily relies on the number of stacked layers.",
"When the graph size becomes larger, the dependencies between distant AMR concepts cannot be fully explored.",
"2) They only focus on modeling the relations between concepts while ignoring edge relations and their structures.",
"Zhu et al. (2019) and Cai and Lam (2019) use Transformer to model arbitrary concept pairs no matter whether directly connected or not, but they still ignore the topological structures of the edges in the entire AMR graph.",
"To address the above limitations, we propose a novel graph-to-sequence model based on graph attention networks (Velickovic et al., 2018).",
"We transform the edge labels into relation nodes and construct a new graph that directly reflects the edge relations.",
"In graph theory, such a graph is called a Line Graph (Harary and Norman, 1960).",
"As illustrated in Figure 1, we thus separate the original AMR graph into two sub-graphs without labeled edges concept graph and relation graph.",
"The two graphs describe the dependencies of AMR concepts and edges respectively, which is helpful in modeling these relationships (especially for edges).",
"Our model takes these sub-graphs as inputs, and the communications between the two graphs are based on the attention mechanism.",
"Furthermore, for both graphs, we mix the higher-order neighborhood information into the corresponding graph encoders in order to model the relationships between indirectly connected nodes.",
"Empirical study on two English benchmark datasets shows that our model reaches state-of-the-art performance with 30.58 and 32.46 BLEU scores on LDC2015E86 and LDC2017T10, respectively.",
"In summary, our contributions include: We propose a novel graph-to-sequence model, which firstly uses the line graph to model the relationships between AMR edges.",
"We integrate higher-order neighborhood information into graph encoders to model the relationships between indirectly connected nodes.",
"We demonstrate that both higher-order neighborhood information and edge relations are important to graph-to-sequence modeling.",
"In this section, we first introduce graph attention networks (GATs) and their mix-order extensions, which are the basis of our proposed model.",
"GAT is a special type of networks that operates on graph-structured data with attention mechanisms.",
"Given a graph G = ( V, E ) , where V and E are ( ) ( ) Figure 2: Neighborhood information in different orders.",
"the set of nodes x i and the set of edges ( e ij , (cid:96) e ) 1 , respectively.",
"N ( x i ) denote the nodes which are directly connected by x i .",
"N + ( x i ) is the set including x i and all its direct neighbors.",
"we have N + ( x i ) = N ( x i ) { x i } .",
"Each node x i in the graph has an initial feature h 0 i R d , where d is the feature dimension.",
"The representation of each node is iteratively updated by the graph attention operation.",
"At the l -th step, each node x i aggregates context information by attending over its neighbors and itself.",
"The updated representation h li is calculated by the weighted average of the connected nodes: h li = (cid:88) x j N + ( x i ) ij h l 1 j W l , (1) where attention coefficient ij is calculated as: ij = softmax j (cid:16) h l 1 i W lt 1 (cid:17) (cid:16) h l 1 j W lt 2 (cid:17) T (2) where is a nonlinear activation function, e.g. ReLU.",
"W l , W lt 1 and W lt 2 R d d are learnable parameters for projections.",
"After L steps, each node will finally have a context-aware representation h Li .",
"In order to achieve a stable training process, we also employ a residual connection followed by layer normalization between two graph attention layers.",
"The relations between indirectly connected nodes are ignored in a traditional graph attention layer.",
"Mix-Order GAT, however, can explore these relationships in a single-step operation by mixing the higher-order neighborhood information.",
"We first give some notations before describing the details of the Mix-Order GAT.",
"We use RK = (cid:8) R 1 , ...",
"RK (cid:9) to represent neighborhood information from order 1 (cid:96) e is the edge label which are not considered in the GAT layer Mix-Order GAT Mix-Order GAT run-02 fast-02 he equal wind ARG0 ARG1-of compare degree M Masked Cross Attention Graphs Self-Updating Self-Attention Attention Output Previous Output.",
"1 to order K .",
"R k ( x i ) denotes the k -th order neighborhood, which means all nodes in R k ( x i ) are reachable for x i within k hops ( k 1 ).",
"R 1 ( x i ) = N + ( x i ) , and as illustrated in Figure 2, we can have: R k ( x i ) = (cid:91) x j R k 1 ( x i ) N + ( x j ) .",
"The K -Mix GAT integrates the neighborhood information RK .",
"At the l -th update step, each x i will interact with its reachable neighbors with different orders and calculate the attentive features independently.",
"The representation h li is updated by the concatenated features from different orders, i.e. h li = MixGAT l ( h l 1 i , RK ) = K (cid:110) k =1 (cid:88) x j R k ( x i ) kij h l 1 j W lk , (4) where (cid:102) represents concatenation, kij are the attention weights in the k -th order, and W lk R d d/K are learnable weights for projections.",
"We will use MixGAT ( ) to denote the Mix-Order GAT layer in the following section.",
"The architecture of our method is illustrated in Figure 3.",
"As mentioned above, we separate the AMR graph into two sub-graphs without labeled edges.",
"Our model follows the Encoder-Decoder architecture, where the encoder takes the two sub-graphs as inputs, and the decoder generates corresponding text from the encoded information.",
"We first give some detailed explanations about the line graph and input representation.",
"The line graph of a graph G is another graph L ( G that represents the adjacencies between edges of G",
"L ( G ) is defined as: Each node of L ( G ) represents an edge of G Two nodes of L ( G ) are adjacent if and only if their corresponding edges share a common node in G .",
"For directed graphs, the directions are maintained in the corresponding line graphs.",
"Redundant edges between two relation nodes are removed in the line graphs.",
"Figure 4 provides several examples.",
"In our model, we use the line graph to organize labeled edges and transform the original AMR graph into two sub-graphs.",
"Given an AMR graph G a = ( V a , E a ) , we separate it into concept graph G c = ( V c , E c ) and relation graph G e = ( V e , E e ) , where G e = L ( G a ) .",
"As for concept graph G c , its topological structure is the same with G a , but the edge labels are eliminated, i.e. V c = V a ; E c = E a , (5) Where E a is the edge set without label information.",
"Both G c and G e have no labeled edges, which can be efficiently encoded by Mix-Order GAT.",
"We use R Kc and R Ke to denote 1 K orders neighborhood information of G c and G e .",
"We represent each concept node x i V c with an initial embedding c 0 i R d , and each relation node y i V e (cid:0) e1 e2 e1 e2 e1 e2 e1 e2 original graph line graph original graph line graph Figure 4: Examples of finding line graphs.",
"with an embedding e 0 i R d .",
"The sets of node embeddings are denoted as C 0 = { c 0 i } mi =1 and E 0 = { e 0 i } ni =1 , where m = | V c | and n = | V e | denote the numbers of concept nodes and relation nodes, respectively.",
"Thus, the inputs of our system can be formulated by I = (cid:8) C 0 , E 0 , R Kc , R Ke (cid:9) .",
"The encoder of our system consists of N stacked graph encoding layers.",
"As illustrated in Figure 3, each graph encoding layer has two parts: self-updating for each graph and masked cross attention.",
"For G c and G e , We use C l 1 = { c l 1 i } mi =1 and E l 1 = { e l 1 i } ni =1 to denote the input node embeddings of the l -th encoding layer.",
"The representations of the two graphs are updated independently by mix-order graph attention networks (MixGAT).",
"At the l -th step (layer), we have: (cid:126) C lself = MixGAT lc 1 ( C l 1 , R Kc ) , (cid:126) E lself = MixGAT le 1 ( E l 1 , R Ke ) .",
"Where (cid:126) C lself and (cid:126) E lself are updated representations according to the mix-order neighborhood information R Kc and R Ke .",
"One thing should be noticed is that both G c and G e are directed graphs.",
"This implies that the information propagation in the graph is in a top-down manner, following the pre-specified direction.",
"However, unidirectional propagation loses the structural information in the reversed direction.",
"To build communication in both directions, we employ Dual Graph (Ribeiro et al., 2019).",
"Dual graph has the same node representations but reversed edge directions compared to the original graph.",
"For example, if edge A B is in the original graph, it turns to B A in the corresponding dual graph.",
"Since dual graphs have the same node representations, we only need to change the neighborhood information.",
"Denote (cid:101) G c and (cid:101) G e as the dual graph of G c and G e .",
"(cid:101)",
"R Kc and (cid:101) R Ke are the corresponding neighborhood information.",
"We have: (cid:126) C l self = MixGAT lc 2 ( C l 1 , (cid:101) R Kc ) , (cid:126) E l self = MixGAT le 2 ( E l 1 , (cid:101) R Ke ) .",
"Since we have updated the node embeddings in two directions, the final representations of the independent graph updating process are the combination of the bi-directional embeddings, i.e.",
"where W lc 1 and W 1 e 1 R 2 d d are trainable matrix for projections.",
"C lself R m d and E lself R n d are results of the self-updating process.",
"Self updating for G c and G e can model the relationships of AMR concepts and edge respectively.",
"However, it is also necessary to explore the dependencies between concept nodes and relation nodes.",
"As a result, the cross-graph communication between G c and G e is very important.",
"From the structure of the original AMR graph, we can easily build alignment between G c and G e .",
"A relation node y i is directly aligned to a concept node x i if x i is the start-point/end-point of the edge corresponding to y i .",
"As illustrated in Figure 1, ARG0 is the edge between run-02 and he .",
"As a result, node ARG0 in G e is directly connect to run-02 and he in G c .",
"We apply the attention mechanism to complete the interaction between the two graphs, and use M R n m to mask the attention weights of unaligned pairs between G c and G e .",
"For element m ij in M , we let m ij = 0 if y i V e is aligned to x j V c , otherwise m ij = .",
"The masked cross attention is employed between the representation sets E lself and C lself , and the matrix of attention weights A l can be calculated as: A l = (cid:16) E lself W la 1 (cid:17) (cid:16) C lself W la 2 (cid:17) T + M , (9) where W la 1 and W la 2 R d d are learnable projection matrixes.",
"The weight scores of unaligned pairs are set to according to M .",
"For nodes in E lself , the relevant representation from C lself is identified using A l as: E lcross = softmax ( A l ) C lself , (10) where E lcross R n d is the masked weighted summation of C lself .",
"The same calculation is performed for nodes in C lself as: C lcross = softmax ( A Tl ) E lself .",
"The final outputs of a graph encoding layer are the combination of the original embeddings and the context representations from another graph.",
"We also employ the outputs from previous layer as residual inputs, i.e. C l = FFN (cid:16)(cid:104) C lself ; C lcross (cid:105) W lc 2 + C l 1 (cid:17) , E l = FFN (cid:16)(cid:104) E lself ; E lcross (cid:105) W le 2 + E l 1 (cid:17) , (12) where FFN is a feed-forward network consists of two linear transformations.",
"After N -stacked graph encoding layers, The two graphs G c and G e are finally encoded as CN and EN .",
"The decoder of our system is similar to the Transformer decoder.",
"At each generation step, the representation of the output token is updated by multiple rounds of attention with the previously-generated tokens and the encoder outputs.",
"Note that the outputs of our graph encoder have two parts: concept representations CN and the relation representations EN .",
"For generation, concept information is more important, since the concept graph directly contains the natural words.",
"With the multi-step cross attention, CN also caries abundant relation information.",
"For simplicity, we only use CN as the encoder output on the decoder side 2 .",
"To address the data sparsity issue in sequence generation, we employ the Byte Pair Encoding (BPE) (Sennrich et al., 2016) following the settings of Zhu et al. (2019).",
"We split the word nodes in AMR graphs and reference sentences into sub-words, and the decoder vocabulary is shared with the encoder for concept graphs.",
"Data and preprocessing We conduct our experiments with two benchmark datasets: LDC2015E85 and LDC2017T10.",
"The two datasets contain 2 We also implement a version which considers both CN and EN , and achieve similar results 16833 and 36521 training samples, and they use a common development set with 1368 samples and a common test set with 1371 samples.",
"We segment natural words in both AMR graphs and references into sub-words.",
"As a result, a word node in AMR graphs may be divided into several sub-word nodes.",
"We use a special edge subword to link the corresponding sub-word nodes.",
"Then, for each AMR graph, we find its corresponding line graph and generate G c and G e respectively.",
"Training details For model parameters, the number of graph encoding layers is fixed to 6, and the representation dimension d is set to 512.",
"We set the graph neighborhood order K = 1 , 2 and 4 for both G c and G e .",
"The Transformer decoder is based on Open-NMT (Klein et al., 2018), with 6 layers, 512 dimensions and 8 heads.",
"We use Adam (Kingma and Ba, 2015) as our optimizer and = (0 . 9 , 0 . 98) .",
"The learning rate is varied over the course of training, similar with Vaswani et al. (2017): lr = d 0 .",
"where t denotes the accumulative training steps, and w indicates the warmup steps.",
"We use w = 16000 and the coefficient is set to 0.75.",
"As for batch size, we use 80 for LDC2015E86 and 120 for LDC2017T10.",
"3 4.2 Results We compare our system with several baselines, including traditional sequence-to-sequence models, several graph-to-sequence models with multiple graph encoders, and transformer-based models.",
"All models are trained on the single dataset without ensemble or additional unlabeled data.",
"For performance evaluation, we use BLEU (Papineni et al., 2002) as our major metric.",
"We also use Meteor (Banerjee and Lavie, 2005), which considers the synonyms between predicted sentences and references.",
"The experimental results on the test sets of LDC2015E86 and LDC2017T10 are reported in Table 1.",
"As we can see, Sequence-based models perform the worst, since they lose useful structural information in graphs.",
"Graph-based models get better results with varied graph encoders to capture the structural information in graphs.",
"Transformer-based models reach previous state-of-the-art with structure-aware self-attention approach to better modeling the relations between indirectly connected concepts.",
"Comparing to previous studies, our approach with K = 4 order neighborhood information reaches the best BLEU scores, improving over the state-of-the-art model (Zhu et al., 2019) by 0.92 on both datasets.",
"Similar phenomena can be found on the additional metrics of Meteor.",
"As mentioned above, our system has two critical points: higher-order graph neighborhood information and relationships between AMR edges.",
"To verify the effectiveness of these two settings, we conduct a series of ablation tests based on different characteristics of graphs.",
"Higher order neighborhood information includes the relationships between indirectly connected nodes.",
"Table 2 shows the connectivity of the concept graphs under different orders.",
"When K = 1 , each node can reach 24.91% of the other nodes directly in the graph (LDC2015E86), and it grows to 41.67% when K = 4 .",
"As suggested in Table 1, if graph nodes only interact with their direct neighbors ( K = 1 ), it performs worse than previous Transformer-based models.",
"However, significant improvement can be observed when we integrate higher-order neighborhood information.",
"As K grows form 1 to 4, the BLEU score increases 1.94 and 2.50 on LDC2015E86 and LDC2017T10, respectively.",
"As mentioned above, if only consider the first-order neighborhood, the dependencies between distant AMR concepts cannot be fully explored when the graph size becomes larger.",
"To verify this hypothesis, we split the test set into different parts according to the AMR graph size (i.e. number of concepts).",
"We evaluate our models with order 23 26 29 32 35 0~1 2~3 4~5 >5 BLEU Ke=0 Ke=4 25 29 33 37 41 1~10 11~20 21~30 >30 BLEU Ke=0 Ke=4 Figure 6: BLEU variation between models with different K e with respect to size of AMR graph and (left) and reentrancy numbers (right).",
"K = 4 and K = 1 on different partitions.",
"All models are trained on LDC2015E86 set.",
"Figure 5 shows the result.",
"The model with K = 4 significantly outperforms the one with K = 1 .",
"Furthermore, we can find that the performance gap between the two models increases when the graph gets bigger.",
"As a result, higher-order neighborhood information does play an important role in graph-to-sequence generation, especially for larger AMR graphs.",
"We are the first one to consider the relationships between labeled edges in AMR graph by integrating the line graph (relation graph) G e in our system.",
"This section will deeply analyze the effectiveness of this contribution.",
"In previous settings, the graph neighborhood order K is the same for both G c and G e .",
"To conduct the ablation test, we fix the neighborhood order K c for G c and vary the order K e for relation graph G e .",
"We set K e = 0 , 1 and 4 , where K e = 0 indicates that the relation nodes in G e can only interact with itself.",
"This means the dependencies between AMR edges are completely ignored, and the edge information is simply combined with the corresponding concepts.",
"We report the results on both test sets in Table 3.",
"edges ( K e = 0), there is a significant performance degradation: 1.69 and 1.38 BLEU score decline on LDC2015E86 and LDC2017T10 respectively.",
"The performance gets better when K e > 0 , which means the edge relations do bring benefits to the graph encoding and sequence generation.",
"When K e = 4 , the edge relations are fully explored in varied neighborhood orders, and it reaches the best performance on both datasets.",
"Performance test on different partitions of AMR graph size (Figure 6, left) also suggests that relationships of edges are helpful when the graph becomes larger.",
"We also study the effectiveness of edge relations when handling reentrancies.",
"Reentrancies are the nodes with multiple parents.",
"Such structures are identified as very difficult aspects in AMR graph (Damonte and Cohen, 2019).",
"We think the relation graph G e is helpful in exploring different dependencies with the same concept, which can bring benefits to those graphs containing more reentrancies.",
"To test this hypothesis, we also split the test set into different parts according to their numbers of reentrancies and evaluate our models with K e = 4 and K e = 0 on different partitions.",
"As shown in Figure 6 (right), the gap becomes wide when the number of reentrancies grows to 5.",
"Also, compare to the graph size, edge relations are more important in handling graphs with reentrancies.",
"In Example",
"(a), two different nodes have same concept compete , but they have different forms in the corresponding natural language.",
"According to the references, one is for competitors and the other is for competition.",
"Our model with K e = 0 fails to distinguish the difference and generate two (f / feel-02 :ARG0 (h / he) :ARG1 (p / person :quant (m / more) :ARG0-of (c / compete-01 ) :ARG1-of (n / new-01) :source (c2 / country :poss (w / we))) :ARG0-of (p2 / participate-01 :ARG1 (c3 / compete-01 :mod (t / this))))) Reference : he felt that , there were more new competitors from our country participating in this competition .",
"K e = 0 : he feels more competition from our country who participate in this competition .",
"K e = 4 : he feels that more new competitors from our country who participate in this competition .",
"competition in the output.",
"However, model with K e = 4 successfully recover word competitors from the context of the AMR graph.",
"In Example",
"(b), the concept they has two parents with the same concept want .",
"Though our model with K e = 0 successfully finds they is the subject of the both two want , it fails to recognize the parallel relationship between the objects money and face and regard face as a verb.",
"In the contrast, our model with K e = 4 perfectly finds the parallel structure in the AMR graph and reconstructs the correct sentence.",
"In Example",
"(c), we compare our best model with two baselines: GCNSEQ (Damonte and Cohen, 2019) and Structural Transformer (Denote as ST-Transformer) from Zhu et al. (2019).",
"The AMR graph in Example",
"(b) has two reentrancies, which makes it more difficult to recover the corresponding sentence.",
"As we can see, traditional graph-based model GCNSEQ cannot predict the correct subject of the predicate can .",
"Structural-Transformer uses the correct subject, but the recovered sentence is quite disfluent because of the redundant people .",
"This overgeneration problem is mainly caused by reentrancies (Beck et al., 2018).",
"However, our model can effectively handle this problem and generates a proper sentence with correct semantics.",
"AMR-to-text generation is a typical graph-to-sequence task.",
"Early research employs rule-based methods to deal with this problem.",
"Flanigan et al. (2016) use two-stage method by first split the graphs into spanning trees and use multiple tree transducers to generate natural language.",
"Song et al. (2017) use heuristic extraction algorithm to learn graph-to-string rules.",
"More works frame graph-to-sequence as a translation task and use either phrase-based (Ferreira et al., 2017; Pour-damghani et al., 2016) or neural-based (Konstas et al., 2017) models.",
"These methods usually need to linearize the input graphs by means of a depth-first traversal.",
"Cao and Clark (2019) get a better sequence-based model by leveraging additional syntactic information.",
"Moving to graph-to-sequence approaches, Marcheggiani and Perez-Beltrachini (2018) first show that graph neural networks can significantly improve the generation performance by explicitly encoding the structure of the graph.",
"Since than, models with variant graph encoders have been proposed in recent years, such as graph LSTM (Song et al., 2018), gated graph neural networks (GGNN) (Beck et al., 2018) and graph convolutional neural networks (Damonte and Cohen, 2019).",
"Guo et al. (2019) introduce dense connectivity to allow the information exchange across different of layers.",
"Ribeiro et al. (2019) learn dual representations capturing top-down and bottom-up adjuvant view of the graph, and reach the best performance in graph-based models.",
"Despite the great success of graph neural networks, they all restrict the update of node representation based on only first-order neighborhood and rely on stacked layers to model the relationships between indirectly connected nodes.",
"To solve this problem, recent studies extend the Transformer (Vaswani et al., 2017) to encode the graph structure.",
"Zhu et al. (2019) and Cai and Lam (2019) use relation-aware self-attention to encode structural label sequences of concept pairs, which can model arbitrary concept pairs no matter whether directly connected or not.",
"With several mechanisms such as sub-word (Sennrich et al., 2016) and shared vocabulary, Zhu et al. (2019) achieved state-of-the-art performance on this task.",
"Our model follows the same spirit of exploring the relations between indirectly connected nodes, but our method is substantially different: (1) we use a graph-based method integrated with higher-order neighborhood information while keeping the explicit structure of graphs.",
"(2) we first consider the relations between labeled edges by introducing line graphs.",
"In this work, we presented a novel graph-to-sequence approach which uses line graph to model the relationships between labeled edges from the original AMR graph.",
"The mix-order graph attention networks are found effective when handling indirectly connected nodes.",
"The ablation studies also demonstrate that exploring edge relations brings benefits to graph-to-sequence modeling.",
"Furthermore, our framework can be efficiently applied to other graph-to-sequence tasks such as WebNLG (Gardent et al., 2017) and syntax-based neural machine translation (Bastings et al., 2017).",
"In future work we would like to do several experiments on other related tasks to test the versatility of our framework.",
"Also, we plan to use large-scale unlabeled data to improve the performance further.",
"We thank the anonymous reviewers for their thoughtful comments.",
"This work has been supported by the National Key Research and Development Program of China (Grant No. 2017YFB1002102) and Shanghai Jiao Tong University Scientific and Technological Innovation Funds (YG2020YQ01)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"other"
] |
[
"Despite the well-developed cut-edge representation learning for language, most language representation models usually focus on specific levels of linguistic units.",
"This work introduces universal language representation learning, i.e., embeddings of different levels of linguistic units or text with quite diverse lengths in a uniform vector space.",
"We propose the training objective MiSAD that utilizes meaningful n -grams extracted from large unlabeled corpus by a simple but effective algorithm for pre-trained language models.",
"Then we empirically verify that well designed pretraining scheme may effectively yield universal language representation, which will bring great convenience when handling multiple layers of linguistic objects in a unified way.",
"Especially, our model achieves the highest accuracy on analogy tasks in different language levels and significantly improves the performance on downstream tasks in the GLUE benchmark and a question answering dataset.",
"In this paper, we propose universal language representation (ULR) that uniformly embeds linguistic units in different hierarchies in the same vector space.",
"A universal language representation model encodes linguistic units such as words, phrases or sentences into fixed-sized vectors and handles multiple layers of linguistic objects in a unified way.",
"ULR learning may offer a great convenience when confronted with sequences of different lengths, especially in tasks such as Natural Language Understanding (NLU) and Question Answering (QA), Corresponding author.",
"This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), Key Projects of National Natural Science Foundation of China (U1836222 and 61733011), Huawei-SJTU long term AI project, Cutting-edge Machine Reading Comprehension and Language Model.",
"This work was supported by Huawei Noah's Ark Lab.",
"hence it is of great importance in both scientific research and industrial applications.",
"As is well known, embedding representation for a certain linguistic unit (i.e., word) enables linguistics-meaningful arithmetic calculation among different vectors, also known as word analogy (Mikolov et al., 2013).",
"For example: King Man = Queen Woman In fact, manipulating embeddings in the vector space reveals syntactic and semantic relations between the original symbol sequences and this feature is indeed useful in true applications.",
"For example, London is the capital of England can be formulized as: England + capital London Then given two documents one of which contains England and capital , the other contains London , we consider them relevant.",
"While a ULR model may generalize such good analogy features onto free text with all language levels involved together.",
"For example, Eat an onion : Vegetable :: Eat a pear : Fruit .",
"ULR has practical values in dialogue systems, by which human-computer communication will go far beyond executing instructions.",
"One of the main challenges of dialogue systems is Dialogue State Tracking (DST).",
"It can be formulated as a semantic parsing task (Cheng et al., 2020), namely, converting natural language utterances with any length into unified representations.",
"Thus this is essentially a problem that can be conveniently solved by mapping sequences with similar semantic meanings into similar representations in the same vector space according to a ULR model.",
"Another use of ULR is in the Frequently Asked Questions (FAQ) retrieval task, where the goal is to answer a user's question by retrieving question paraphrases that already have an answer from the database.",
"Such task can be accurately done by only manipulating vectors such as calculating and ranking vector distance (i.e., cosine similarity).",
"The core is to embed sequences of different lengths in the same vector space.",
"Then a ULR model retrieves the correct question-answer pair for the user query according to vector distance.",
"In this paper, we propose a universal language representation learning method that generates fixed-sized vectors for sequences of different lengths based on pre-trained language models (Devlin et al., 2019; Lan et al., 2019; Clark et al., 2020).",
"We first introduce an efficient approach to extract and prune meaningful n -grams from unlabeled corpus.",
"Then we present a new pre-training objective, Minimizing Symbol-vector Algorithmic Difference (MiSAD), that explicitly applies a penalty over different levels of linguistic units if their representations tend not to be in the same vector space.",
"To investigate our model's ability of capturing different levels of language information, we introduce an original universal analogy task derived from Google's word analogy dataset, where our model significantly improves the performance of previous pre-trained language models.",
"Evaluation on a wide range of downstream tasks also demonstrates the effectiveness of our ULR model.",
"Overall, our ULR-BERT reaches the highest average accuracy on the universal analogy dataset and obtains 1.1% gain over Google BERT on the GLUE benchmark.",
"Extensive experimental results on a question answering task verifies that our model can be easily applied to real-world applications in an extremely convenient way.",
"Previous language representation learning methods such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), LASER (Artetxe and Schwenk, 2019), InferSent (Conneau et al., 2017) and USE (Cer et al., 2018) focus on specific granular linguistic units, e.g., words or sentences.",
"Later proposed ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), BERT (Devlin et al., 2019) and XLNet (Yang et al., 2020) learns contextualized representation for each input token.",
"Although such pre-trained language models (PrLMs) more or less are capable of offering universal language representation through their general-purpose training objectives, all the PrLMs devote into the contextualized representations from a generic text background and pay little attention on our concerned universal language presentation.",
"As a typical PrLM, BERT is trained on a large amount of unlabeled data including two training targets: Masked Language Model (MLM), and Next Sentence Prediction (NSP).",
"ALBERT (Lan et al., 2019) is trained with Sentence-Order Prediction (SOP) as a replacement of NSP.",
"StructBERT (Wang et al., 2020) combines NSP and SOP to learn inter-sentence structural information.",
"Nevertheless, RoBERTa (Liu et al., 2019) and SpanBERT (Joshi et al., 2020) show that single-sequence training is better than the sentence-pair scenario.",
"Besides, BERT-wwm (Cui et al., 2019), StructBERT (Joshi et al., 2020), SpanBERT (Wang et al., 2020) perform MLM on higher linguistic levels, augmenting the MLM objective by masking whole words, trigrams or spans, respectively.",
"ELECTRA (Clark et al., 2020) further improves pre-training through a generator and discriminator architecture.",
"The aforementioned models may seemingly handle different sized input sequences, but all of them focus on sentence-level specific representation still for each word, which may cause unsatisfactory performance in real-world situations.",
"There are a series of downstream NLP tasks especially on question answering which may be conveniently and effectively solved through ULR like solution.",
"Actually, though in different forms, these tasks more and more tend to be solved by our suggested ULR model, including dialogue utterance regularization (Cao et al., 2020), question paraphrasing (Bonadiman et al., 2019), measuring QA similarities in FAQ tasks (Damani et al., 2020; Sakata et al., 2019).",
"As pre-trained contextualized language models show their powerfulness in generic language representation for various downstream NLP tasks, we present a BERT-style ULR model that is especially designed to effectively learn universal, fixed-sized representations for input sequences of any granularity, i.e., words, phrases, and sentences.",
"Our proposed pre-training method is furthermore strengthened in three-fold.",
"First, we extract a large number of meaningful n -grams from monolingual corpus based on point-wise mutual information to leverage the multi-granular structural information.",
"Second, inspired by word and phrase representation and their compositionality, we introduce a novel pre-training objective that directly models the input sequences and the extracted n -grams through manipulating their representations.",
"Finally, we implement a normalized score for each n -gram to guide their sampling for training.",
"Given a symbol sentence, Joshi et al. (2020) utilize span-level information by randomly masking and predicting contiguous segments.",
"Different from such random sampling strategy, our method is based on point-wise mutual information (PMI) (Church and Hanks, 1989) that makes efficient use of statistics and automatically extracts meaningful n -grams from unlabeled corpus.",
"Mutual information (MI) describes the association between two tokens by comparing the probability of observing them together with the probabilities of observing them independently.",
"Higher mutual information indicates stronger association between the tokens.",
"To be specific, an n -gram is denoted as w = ( x 1 , . . . , x | w | ) , where | w | is the number of tokens in w and | w | > 1 .",
"Therefore, we present an extended PMI formula as follows: P MI ( w ) = 1 | w | log P ( w ) | w | (cid:88) k =1 log P ( x k ) where the probabilities are estimated by counting the number of observations of each token and n gram in the corpus, and normalizing by the corpus size.",
"1 | w | is an additional normalization fac-tor which avoids extremely low scores for long n -grams.",
"We first collect all n -grams with lengths up to N using the SRILM toolkit 1 (Stolcke, 2002), and compute PMI scores for all the n -grams based on their occurrences.",
"Then, only n -grams with PMI scores higher than the chosen threshold are selected and input sequences are marked with the corresponding n -grams.",
"While the MLM training objective as in BERT (De-vlin et al., 2019) and its extensions (Cui et al., 2019; Joshi et al., 2020; Wang et al., 2020) are widely used for pre-trained contextualized language modeling, they do not focus on our concerned ULR,",
"1 http://www.speech.sri.com/projects/srilm/download.html",
"which demands an arithmetic corresponding relationship between the symbol and its represented vector.",
"In order to directly model such demand, we propose a novel training target Minimizing Symbol-vector Algorithmic Difference (MiSAD) that leverages the vector space regularity of different granular linguistic units.",
"For example, the following symbol sequence equation London is + the capital of England = London is the capital of England (1) indicates a vector algorithmic equation according to our ULR goal, vector ( London is ) + vector ( the capital of England ) = vector ( London is the capital of England ) (2) Thus, if the symbol equation (1) cannot imply the respective vector equation (2), we may set a training objective to let the ULR model forcedly learn such relationship.",
"Formally, we denote the input sequence by S = { x 1 , . . . , x m } , where m is the number of tokens in S .",
"After n -gram extracting and pruning by means of PMI, each sequence is marked with several n -grams.",
"During pre-training, only one of them is selected by the n -gram scoring function, which will be introduced in detail in Section 3.3, and the input sequence is represented as S = { x 1 , . . . , x i 1 , w, x j +1 , . . . , x m } , where the n -gram w = { x i , . . . , x j } ( 1 i < j m ) is a sub-sequence of S .",
"Then we convert S into two independent parts the n -gram w and the rest of the tokens R = { x 1 , . . . , x i 1 , x j +1 , . . . , x m } which are fed into the model separately along with the original complete sequence.",
"The Transformer encoder generates a contextualized representation for each token in the sequence.",
"To derive fixed-sized vectors for sequences of different lengths, we use the pooled output of the [CLS] token as sequence embeddings.",
"The model is trained to minimize the following Mean Square Error (MSE) loss: L MiSAD = MSE ( E w + ER , ES ) where E w , ER and ES are representations of w , R and S , respectively, and are all normalized to unit lengths.",
"To enhance the robustness of the model, we jointly train MiSAD and the MLM objective LMLM as in BERT with equal weights.",
"Since the input sentence S is split into w + R , we must avoid masking out the n -gram w in the original sentence in order not to affect the semantics after vector space combination.",
"However, tokens in n -grams other than w have equal weights of being replaced with [MASK] as other tokens.",
"The final loss function is as follows: L = L MiSAD + LMLM 3.3 n -gram Sampling For a given sequence, the importance of different n -grams and the degree to which the model understands their semantics are different.",
"Instead of sampling n -grams at random, we let the model decide which n -gram to choose based on the knowledge learned in the pre-training stage.",
"Following Tamborrino et al. (2020), we employ a normalized score for each n -gram in the input sequence using the masked language modeling head.",
"We mask one n -gram at a time and the model outputs probabilities of the masked tokens given their surrounding context.",
"The score of an n -gram w is calculated as the average probabilities of all tokens in it.",
"where | w | is the length of w and S \\ w is the notation of an input sequence S with all tokens within w replaced by the special token [MASK] .",
"Finally, we choose the n -gram with the lowest score for our training target.",
"As for the pre-training corpus, we download the English Wikipedia Corpus 2 and pre-process with process wiki.py 3 , which extracts text from xml files.",
"When processing paragraphs from Wikipedia, we find that a large number of entities are annotated with special marks, which may be useful for our task.",
"Therefore, we identify all the entities and treat them as high-quality n -grams.",
"Then, we remove punctuation marks and characters 2 https://dumps.wikimedia.org/enwiki/latest 3 https://github.com/panyang/Wikipedia Word2vec/blob/ master/v1/process wiki.py in other languages based on regular expressions, and finally get a corpus of 2,266M words.",
"As for n -gram pruning, PMI scores of all n grams with a maximum length of N = 6 are calculated for each document.",
"We manually evaluate the extracted n -grams and find more than 50% of the top 2000 n -grams contain 2 3 words, and only less than 3% n -grams are longer than 4.",
"Although a larger n -gram vocabulary can cover longer n grams, it will cause too many meaningless n -grams at the same time.",
"Therefore, we empirically retain the top 3000 n -grams for each document.",
"Finally, we randomly sample 10M sentences from the entire corpus to reduce training time.",
"During pre-training, BERT packs sentence pairs into a single sequence and use the special [CLS] token as sentence-pair representation.",
"However, our MiSAD training objective requires single-sentence inputs.",
"Thus in our experiments, each input is an n -ngram or a single sequence with a maximum length of 128.",
"Special tokens [CLS] and [SEP] are added at the front and end of each input, respectively.",
"Instead of training from scratch, we initialize our model with the officially released checkpoints of BERT (Devlin et al., 2019), ALBERT (Lan et al., 2019) and ELECTRA (Clark et al., 2020).",
"We use Adam optimizer (Kingma and Ba, 2017) with initial learning rate of 5e-5 and linear warmup over the first 10% of the training steps.",
"Batch size is 64 and dropout rate is 0.1.",
"Each model is trained for one epoch over 10M training examples on four Nvidia Tesla P40 GPUs.",
"We construct a universal analogy dataset in terms of words, phrases and sentences and experiment with multiple representation models to examine their ability of representing different levels of linguistic units through a task-independent evaluation 4 .",
"Furthermore, we conduct experiments on a wide range of downstream tasks from the GLUE benchmark and a question answering task.",
"Our universal analogy dataset is based on Google's word analogy dataset and contains three levels of tasks: words, phrases and sentences.",
"Word-level Recall that in a word analogy task (Mikolov et al., 2013), two pairs of words that share the same type of relationship, denoted as A : B :: C : D , are involved.",
"The goal is to retrieve the last word from the vocabulary given the first three words.",
"To facilitate comparison between models with different vocabularies, we construct a closed-vocabulary analogy task based on Google's word analogy dataset through negative sampling.",
"Concretely, for each original question, we use GloVe to rank every word in the vocabulary and the top 5 results are considered to be candidate words.",
"If GloVe fails to retrieve the correct answer, we manually add it to make sure it is included in the candidates.",
"During evaluation, the model is expected to select the correct answer from 5 candidate words.",
"Table 1 shows examples from our word anlogy dataset.",
"Phrase-/Sentence-level To derive higher level analogy datasets, we put word pairs from the word-level dataset into contexts so that the resulting phrase and sentence pairs also have linear relationships.",
"Phrase and sentence templates are ex-trated from the English Wikipedia Corpus.",
"Both phrase and sentence datasets have four types of semantic analogy and three kinds of syntactic analogy.",
"Please refer to Appendix A for details about our approach of constructing the universal analogy dataset.",
"The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of tasks that are widely used to evaluate the performance of a model in language understanding.",
"We divide NLU tasks from the GLUE benchmark into three main categories.",
"Single-Sentence Classification Single-sentence classification tasks includes SST-2 (Socher et al., 2013), a sentiment classification task, and CoLA (Warstadt et al., 2019), a task that is to determine whether a sentence is grammatically acceptable.",
"et al., 2009) and WNLI (Levesque et al., 2012).",
"However, we exclude the problematic WNLI in accordance with Devlin et al. (2019).",
"Semantic Similarity MRPC (Dolan and Brockett, 2005), QQP (Chen et al., 2018) and STS-B (Cer et al., 2017) are semantic similarity tasks, where the model is required to either determine whether the two sentences are equivalent or assign a similarity score for them.",
"In the fine-tuning stage, pairs of sentences are concatenated into a single sequence with a special token [SEP] in between.",
"For both single sentence and sentence pair tasks, the hidden state of the first token [CLS] is used for softmax classification.",
"We use the same sets of hyperparameters for all the evaluated models.",
"Experiments are ran with batch sizes in { 8, 16, 32, 64 } and learning rate of 3e-5 for 3 epochs.",
"GEOGRANNOGEOGRANNO (Herzig and Berant, 2019) contains natural language paraphrases paired with logical forms.",
"The dataset is manually annotated: For each natural language utterance, a correct canonical utterance paraphrase is selected.",
"The train/dev sets have 487 and 59 paraphrase pairs, respectively.",
"In our experiments, we focus on question paraphrase retrieval, whose task is to retrieve the correct paraphrase from all 158 different sentences when given a question.",
"Most of the queries have only one correct answer while some have two or more matches.",
"Evaluation metrics are Top-1/5/10 accuracy.",
"For GEOGRANNO and the universal analogy task, we apply three pooling strategies on top of the PrLM: Using the vector of the [CLS] token, mean-pooling of all token embeddings and max-pooling over time of all embeddings.",
"The default setting is mean-pooling.",
"On the universal analogy task, we adopt three types of baselines including bag-of-words (BoW) model from pre-trained word embeddings: GloVe (Pen-nington et al., 2014), sentence embedding models: InferSent (Conneau et al., 2017), GenSen (Subra-Model",
"manian et al., 2018), USE (Cer et al., 2018) and LASER (Artetxe and Schwenk, 2019), and pre-trained contextualized language models: BERT, ALBERT and ELECTRA.",
"On GLUE and GEOGRANNO , we especially evaluate our model and two baseline models: BERT The officially released pre-trained BERT models (Devlin et al., 2019).",
"MLM-BERT BERT models trained with the same additional steps with our model on Wikipedia using only the MLM objective.",
"ULR-BERT Our universal language representation model trained on Wikipedia with MLM and MiSAD.",
"Results on our universal analogy dataset are reported in Table 2.",
"Generally, semantic analogies are more challenging than the syntactic ones and higher-level relationships between sequences are more difficult to capture, which is observed in almost all the evaluated models.",
"On the word analogy task, GloVe achieves the highest accuracy (80.3%) while its performance drops sharply on higher-level tasks.",
"All well trained PrLMs like BERT, ALBERT 5 https://gluebenchmark.com and ELECTRA hardly exhibit arithmetic characteristics and increasing the model size usually leads to a decrease in accuracy.",
"However, training models with our properly designed MiSAD objective greatly improves the performance.",
"Especially, ULR-BERT obtains 15% 25% absolute gains on word-level analogy, such results are so strong to be comparable to GloVe, which especially focuses on the linear word analogy feature from its training scheme.",
"Meanwhile GloVe performs far worse than our model on higher-level analogies.",
"Overall, ULR-BERT achieves the highest average accuracy (45.8%), an absolute gain of 8.1% over BERT, indicating that it has indeed more effectively learned universal language representations across different linguistic units.",
"It demonstrates that our pre-training method is effective and can be adapted to different PrLMs.",
"Table 3 shows the performance on the GLUE benchmark.",
"Our model improves the BERTBASE and BERTLARGE by 1.1% and 0.7% on average, respectively.",
"Since our model is established on the released checkpoints of Google BERT, we make additional comparison with MLM-BERT that is trained under the same procedure as our model except for the pre-training objective.",
"While the model trained with more MLM updates may improve the performance on some tasks, it underper-Batch size: 8, 16, 32, 64; Length: 128; Epoch: 3; lr: 3e-5 Model Single Sentence Natural Language Inference Semantic Similarity Avg.",
"forms BERT on datasets such as MRPC, RTE and SST-2.",
"Our model exceeds MLM-BERTBASE and MLM-BERTLARGE by 0.9% and 0.7% on average respectively.",
"The main gains from the base model are in CoLA (+4.6%) and RTE (+1.4%), which are entirely contributed by our MiSAD training objective.",
"Overall, our model improves the performance of its baseline on every dataset in the GLUE benchmark, demonstrating its effectiveness in real applications of natural language understanding.",
"Table 4 shows the performance on GEOGRANNO .",
"As we can see, 4 out of 6 evaluated pre-trained language models significantly outperform BM25 for Top-1 accuracy, indicating the superiority of contextualized embedding-based models over the statistical method.",
"Among all the evaluated models, ULR-BERT yields the highest accuracies (39.7%/68.8%/77.3%).",
"To be specific, our ULR models exceeds BERTBASE and BERTLARGE by 10.1% and 19.2% and obtains 2.7% and 10.6% improvements compared with MLM-BERTBASE and MLM-BERTLARGE in terms of Top-1 accuracy, respectively, which are consistent with the results on the GLUE benchmark.",
"Since n -grams and sentences of different lengths are involved in the pre-training of our model, it is especially better at understanding the semantics of input sequences and mapping queries to their paraphrases according to the learned sense of semantic equality.",
"In this section, we explore to what extent does our model benefit from the MiSAD objective and sampling strategy, and further confirm that our pretraining procedure improves the model's ability of encoding variable-length sequences.",
"To make a fair comparison, we train BERT with the same additional updates using different combinations of training tasks:",
"NSP-BERT is trained with MLM and NSP, whose goal is to distinguish whether two input sentences are consecutive.",
"For each sentence, we choose its following sentence 50% of the time and randomly sample a sentence 50% of the time.",
"SOP-BERT is trained with MLM and SOP, a substitute of the NSP task that aims at better modeling the coherence between sentences.",
"Consistent with Lan et al. (2019), we sample two consecutive sentences in the same document as a positive Model Single Sentence Natural Language Inference Semantic Similarity Avg.",
"For both baselines and ULR, we use the same set of parameters for 5 runs, and average scores on the GLUE test set are reported in Table 5.",
"Although we expect NSP and SOP to help the model better understand the relationship between sentences and benefit tasks like natural language inference, they hardly improve the performance on GLUE according to our strict implementation.",
"Specifically, NSP-BERT outperforms MLM-BERT on datasets such as CoLA, QNLI and QQP while less satisfactory on other tasks.",
"SOP-BERT is on a par with MLM-BERT on three NLI tasks but it sharply decreases the score on other datasets.",
"In general, single-sentence training with only the MLM objective accounts for better performance as described by Liu et al. (2019); Joshi et al. (2020).",
"Besides, our training strategy which combines MLM and MiSAD yields the most considerable gains compared with other training objectives.",
"Table 6 shows standard deviation, mean and maximum performance on CoLA/RTE/MRPC dev set when fine-tuning BERT and ULR-BERT over 5 random seeds, which clearly shows that our model is generally more stable and yields better results compared with BERT.",
"We compare our PMI-based n -gram sampling scheme with two alternatives.",
"Specifically, we train the following two baseline models under the same model settings except for the sampling strategy.",
"Random Spans We replace our n -gram module with the masking strategy as proposed by Joshi et al. (2020), where the sampling probability of span length l is based on a geometric distribution l Geo ( p ) .",
"The parameter p is set to 0.2 and maximum span length l max = 6 .",
"Table 7 shows the effect of different sampling schemes on the GLUE dev set.",
"As we can see, our PMI-based n -gram sampling is preferable to other strategies on 6 out of 8 tasks.",
"CoLA and RTE are more sensible to sampling strategies than other tasks.",
"On average, using named entities and meaningful n -grams is better than randomly sampled spans.",
"We attribute the source to the reason is that random span sampling ignores important semantic and syntactic structure of a sequence, resulting in a large number of meaningless segments.",
"Compared with using only named entities, our PMI-based approach automatically discovers structures within any sequence and is not limited to any granularity, which is critical to pre-training universal language representation.",
"Experiments on the universal analogy task reveal that our proposed training scheme can be adapted to various pre-trained langauge models.",
"In this subsection, we compare our model with BERT, ALBERT and ELECTRA on GEOGRANNO and the GLUE benchmark.",
"Table 8 shows the results on GEOGRANNO and the GLUE dev set, where our approach can enhance the performance of all three pre-trained mod-Model Single Sentence Natural Language Inference Semantic Similarity Avg.",
"els.",
"Among all the evaluated models, ULR-BERT achieves the largest gains on GLUE while ULR-ELECTRA obtains the most significant improvement on GEOGRANNO .",
"It further verifies the effectiveness and universality of our model.",
"In previous experiments on GEOGRANNO , our model has shown considerable improvement over all three evaluated PrLMs.",
"The task involves text matching between linguistic units at different levels where queries are sentences and labels are often phrases.",
"Thus the performance on such task highly depends on the model's ability to uniformly deal with linguistic units of different granularities.",
"In the following, we explore deeper details and interpretability of how our proposed objective act at different levels of linguistic units.",
"Specifically, we intuitively show the consistency of the representations learned by ULR-BERT by grouping the dataset according to query length | q | and the absolute difference between query length and Question length abs ( | q | | Q | ) , respectively.",
"Results are shown in Table 9, which clearly shows that as the length of the query increases, the performance of BERT drops sharply.",
"Similarly, BERT is more sensible to the difference between query length and Question length.",
"In contrast, ULR-BERT is more stable when dealing with sequences of different lengths and is superior to BERT in terms of representation consistency, which we speculate is due to the interaction between different levels of linguistic units in the pretraining procedure.",
"This work formally introduces universal language representation learning to enable unified vector operations among different language hierarchies.",
"For such a purpose, we propose three highlighted ULR learning enhancement, including the newly designed training objective, Minimizing Symbol-vector Algorithmic Difference (MiSAD).",
"In detailed model implementation, we extend BERT's pre-training objective to a more general level, which leverages information from sequences of different lengths in a comprehensive way.",
"In addition, we provide a universal analogy dataset as a task-independent evaluation benchmark.",
"Overall experimental results show that our proposed ULR model is generally effective in a broad range of NLP tasks including natural language question answering and so on."
] | [
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"result",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective"
] |
[
"Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing.",
"In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability , purportedly useful both for providing insights to practitioners and for explaining why a model makes its decisions to stakeholders.",
"We call the latter use of attention mechanisms into question by demonstrating a simple method for training models to produce deceptive attention masks.",
"Our method diminishes the total weight assigned to designated impermissible tokens, even when the models can be shown to nevertheless rely on these features to drive predictions.",
"Across multiple models and tasks, our approach manipulates attention weights while paying surprisingly little cost in accuracy.",
"Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on the gender.",
"Consequently, our results cast doubt on attention's reliability as a tool for auditing algorithms in the context of fairness and accountability.",
"1 1 Introduction Since their introduction as a method for aligning inputs and outputs in neural machine translation, attention mechanisms (Bahdanau et al., 2014) have emerged as effective components in various neural network architectures.",
"Attention works by aggregating a set of tokens via a weighted sum, where the attention weights are calculated as a function of both the input encodings and the state of the decoder.",
"Because attention mechanisms allocate weight among the encoded tokens, these coefficients are 1 The code and the datasets used in paper are available at https://github.com/danishpruthi/ deceptive-attention Attention Biography Label Original Ms. X practices medicine in Memphis, TN and ...",
"sometimes thought of intuitively as indicating which tokens the model focuses on when making a particular prediction.",
"Based on this loose intuition, attention weights are often claimed to explain a model's predictions.",
"For example, a recent survey on attention (Galassi et al., 2019) remarks: By inspecting the networks attention, ... one could attempt to investigate and understand the outcome of neural networks. Hence, weight visualization is now common practice.",
"In another work, De-Arteaga et al. (2019) study gender bias in machine learning models for occupation classification.",
"As machine learning is increasingly used in hiring processes for tasks including resume filtering, the potential for bias raises the spectre that automating this process could lead to social harms.",
"De-Arteaga et al. (2019) use attention over gender-revealing tokens (e.g., she', he', etc.) to verify the gender bias in occupation classification modelsstating that the attention weights indicate which tokens are most predictive.",
"Similar claims about attention's utility for interpreting models' predictions are common in the literature (Li et al., 2016; Xu et al., 2015; Choi et al., 2016; Xie et al., 2017; Martins and Astudillo, 2016; Lai and Tan, 2019).",
"In this paper, we question whether attention scores necessarily indicate features that influence a model's predictions.",
"Through a series of experiments on diverse classification and sequence-to-sequence tasks, we show that attention scores are surprisingly easy to manipulate.",
"We design a simple training scheme whereby the resulting models appear to assign little attention to a specified set of impermissible tokens while continuing to rely upon those features for prediction.",
"The ease with which attention can be manipulated without significantly affecting performance suggests that even if a vanilla model's attention weights conferred some insight (still an open and ill-defined question), these insights would rely on knowing the objective on which models were trained.",
"Our results present troublesome implications for proposed uses of attention in the context of fairness, accountability, and transparency.",
"For example, malicious practitioners asked to justify how their models work by pointing to attention weights could mislead regulators with this scheme.",
"For instance, looking at manipulated attention-based explanation in Table 1, one might (incorrectly) assume that the model does not rely on the gender prefix.",
"To quantitatively study the extent of such deception, we conduct studies where we ask human subjects if the biased occupation classification models (like the ones audited by De-Arteaga et al. (2019)) rely on gender related information.",
"We find that our manipulation scheme is able to deceive human annotators into believing that manipulated models do not take gender into account, whereas the models are heavily biased against gender minorities (see 5.2).",
"Lastly, practitioners often overlook the fact that attention is typically not applied over words but over final layer representations, which themselves capture information from neighboring words.",
"We investigate the mechanisms through which the manipulated models attain low attention values.",
"We note that",
"(i) recurrent connections allow information to flow easily to neighboring representations;",
"(ii) for cases where the flow is restricted, models tend to increase the magnitude of representations corresponding to impermissible tokens to offset the low attention scores; and",
"(iii) models additionally rely on several alternative mechanisms that vary across random seeds (see 5.3).",
"identify alternate adversarial attention weights after the model is trained that nevertheless produce the same predictions, and hence claim that attention is not explanation .",
"However, these attention weights are chosen from a large (infinite up to numerical precision) set of possible values and thus it is not surprising that multiple weights produce the same prediction.",
"Moreover since the model does not actually produce these weights, they would never be relied on as explanations in the first place.",
"Similarly, Serrano and Smith (2019) modify attention values of a trained model post-hoc by hard-setting the highest attention values to zero.",
"They find that the number of attention values that must be zeroed out to alter the model's prediction is often too large, and thus conclude that attention is not a suitable tool to for determining which elements should be attributed as responsible for an output.",
"In contrast to these two papers, we manipulate the attention via the learning procedure, producing models whose actual weights might deceive an auditor.",
"In parallel work to ours, Wiegreffe and Pinter (2019) examine the conditions under which attention can be considered a plausible explanation.",
"They design a similar experiment to ours where they train an adversarial model, whose attention distribution is maximally different from the one produced by the base model.",
"Here we look at a related but different question of how attention can be manipulated away from a set of impermissible tokens.",
"Using human studies we show that our training scheme leads to attention maps that are more deceptive , since people find them to be more believable explanations of the output (see 5.2).",
"We also extend our analysis to sequence-to-sequence tasks, and a broader set of models, including BERT, and identify mechanisms by which the manipulated models rely on the impermissible tokens despite assigning low attention to them.",
"Lastly, several papers deliberately train attention weights by introducing an additional source of supervision to improve predictive performance.",
"In some of these papers, the supervision comes from known word alignments for machine translation (Liu et al., 2016; Chen et al., 2016), or by aligning human eye-gaze with model's attention for sequence classification (Barrett et al., 2018).",
"Let S = w 1 , w 2 , . . . , w n denote an input sequence of n words.",
"We assume that for each task, we are Dataset (Task) Input Example Impermissible Tokens (Percentage) CommonCrawl Biographies (Physician vs Surgeon) Ms. X practices medicine in Memphis, TN and is affiliated with . . . Ms. X speaks English and Spanish.",
"given a pre-specified set of impermissible words I , for which we want to minimize the corresponding attention weights.",
"For example, these may include gender words such as he, she, Mr., or Ms..",
"We define the mask m to be a binary vector of size n , such that m i = (cid:40) 1 , if w i I 0 otherwise .",
"Further, let [0 , 1] n denote the attention assigned to each word in S by a model, such that (cid:80) i i = 1 .",
"For any task-specific loss function L , we define a new objective function L (cid:48) = L + R where R is an additive penalty term whose purpose is to penalize the model for allocating attention to impermissible words.",
"For a single attention layer, we define R as: R = log(1 T m ) and is a penalty coefficient that modulates the amount of attention assigned to impermissible tokens.",
"The argument of the log term ( 1 T m ) captures the total attention weight assigned to permissible words.",
"In contrast to our penalty term, Wiegreffe and Pinter (2019) use KL-divergence to maximally separate the attention distribution of the manipulated model ( new ) from the attention distribution of the given model ( old ): R (cid:48) = KL ( new (cid:107) old ) .",
"However, their penalty term is not directly applicable to our case: instantiating old to be uniform over impermissible tokens, and 0 over remainder tokens results in an undefined loss term.",
"When dealing with models that employ multiheaded attention, which use multiple different attention vectors at each layer of the model (Vaswani et al., 2017) we can optimize the mean value of our penalty as assessed over the set of attention heads H as follows: R = |H| (cid:88) h H log(1 Th m )) .",
"When a model has many attention heads, an auditor might not look at the mean attention assigned to certain words but instead look head by head to see if any among them assigns a large amount of attention to impermissible words.",
"Anticipating this, we also explore a variant of our approach for manipulating multi-headed attention where we penalize the maximum amount of attention paid to impermissible words (among all heads) as follows: R = min h H log(1 Th m ) .",
"We study the manipulability of attention on four binary classification problems, and four sequence-to-sequence tasks.",
"In each dataset, (in some, by design) a subset of input tokens are known a priori to be indispensable for achieving high accuracy.",
"Occupation classification We use the biographies collected by De-Arteaga et al. (2019) to study bias against gender-minorities in occupation classification models.",
"We carve out a binary classification task of distinguishing between surgeons and (non-surgeon) physicians from the multi-class occupation prediction setup.",
"We chose this subtask because the biographies of the two professions use similar words, and a majority of surgeons ( > 80% ) in the dataset are male.",
"We further downsample minority classesfemale surgeons, and male physiciansby a factor of ten, to encourage models to use gender related tokens.",
"Our models (described in detail later in 4.2) attain 96 .",
"4% accuracy on the task, and are reduced to 93 .",
"8% when the gender pronouns in the biographies are anonymized.",
"Thus, the models (trained on unanonymized data) make use of gender indicators to obtain a higher task performance.",
"Consequently, we consider gender indicators as impermissible tokens for this task.",
"Pronoun-based Gender Identification We construct a toy dataset from Wikipedia comprised of biographies, in which we automatically label biographies with a gender (female or male) based solely on the presence of gender pronouns .",
"To do so, we use a pre-specified list of gender pronouns.",
"Biographies containing no gender pronouns, or pronouns spanning both classes are discarded.",
"The rationale behind creating this dataset is that due to the manner in which the dataset was created, attaining 100% classification accuracy is trivial if the model uses information from the pronouns.",
"However, without the pronouns, it may not be possible to achieve perfect accuracy.",
"Our models trained on the same data with pronouns anonymized, achieve at best 72.6% accuracy.",
"Sentiment Analysis with Distractor Sentences We use the binary version of Stanford Sentiment Treebank (SST) (Socher et al., 2013), comprised of 10 , 564 movie reviews.",
"We append one randomly-selected distractor sentence to each review, from a set of opening sentences of Wikipedia pages.",
"2 Here, without relying upon the tokens in the SST sentences, a model should not be able to outperform random guessing.",
"Graduate School Reference Letters We obtain a dataset of recommendation letters written for the purpose of admission to graduate programs.",
"The task is to predict whether the student, for whom the letter was written, was accepted.",
"The letters include students' ranks and percentile scores as marked by their mentors, which admissions committee members rely on.",
"Indeed, we notice accu-2 Opening sentences tend to be declarative statements of fact and typically are sentiment-neutral.",
"racy improvements when using the rank and percentile features in addition to the reference letter.",
"Thus, we consider percentile and rank labels (which are appended at the end of the letter text) as impermissible tokens.",
"An example from each classification task is listed in Table 2. More details about the datasets are in the appendix.",
"Embedding + Attention For illustrative purposes, we start with a simple model with attention directly over word embeddings.",
"The word embeddings are aggregated by a weighted sum (where weights are the attention scores) to form a context vector, which is then fed to a linear layer, followed by a softmax to perform prediction.",
"For all our experiments, we use dot-product attention, where the query vector is a learnable weight vector.",
"In this model, prior to attention there is no interaction between the permissible and impermissible tokens.",
"The embedding dimension size is 128 .",
"BiLSTM + Attention The encoder is a single-layer bidirectional LSTM model (Graves and Schmidhuber, 2005) with attention, followed by a linear transformation and a softmax to perform classification.",
"The embedding and hidden dimension size are both set to 128 .",
"Transformer Models We use the Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2019).",
"We use the base version consisting of 12 layers with self-attention.",
"Further, each of the self-attention layers consists of 12 attention heads.",
"The first token of every sequence is the special classification token [CLS] , whose final hidden state is used for classification tasks.",
"To block the information flow from permissible to impermissible tokens, we multiply attention weights at every layer with a self-attention mask M , a binary matrix of size n n where n is the size of the input sequence.",
"An element M i,j represents whether the token w i should attend on the token w j .",
"M i,j is 1 if both i and j belong to the same set (either the set of impermissible tokens, I or its complement I c ).",
"Additionally, the [CLS] token attends to all the tokens, but no token attends to [CLS] to prevent the information flow between I and I c (Figure 1 illustrates this setting).",
"We attempt to manipulate attention from [CLS] token to other tokens, and consider two variants: one where we manipulate the maxi-Figure 1: Restricted self-attention in BERT.",
"The information flow through attention is restricted between impermissible and permissible tokens for every encoder layer.",
"The arrows represent the direction of attention.",
"mum attention across all heads, and one where we manipulate the mean attention.",
"Previous studies analysing the interpretability of attention are all restricted to classification tasks (Jain et al., 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019).",
"Whereas, attention mechanism was first introduced for, and reportedly leads to significant gains in, sequence-to-sequence tasks.",
"Here, we analyse whether for such tasks attention can be manipulated away from its usual interpretation as an alignment between output and input tokens.",
"We begin with three synthetic sequence-to-sequence tasks that involve learning simple input-to-output mappings.",
"3 Bigram Flipping The task is to reverse the bi-grams in the input ( { w 1 , w 2 . . . w 2 n 1 , w 2 n } { w 2 , w 1 , . . . w 2 n , w 2 n 1 } ) .",
"The motivation for evaluating on the synthetic tasks is that for any given target token, we precisely know the input tokens responsible.",
"Thus, for these tasks, the gold alignments act as impermissible tokens in our setup (which are different for each output token).",
"For each of the three tasks, we programmatically generate 100 K random input training sequences (with their corresponding target sequences) of length upto 32 .",
"The input and output vocabulary is fixed to a 1000 unique tokens.",
"For the task of bigram flipping, the input lengths 3 These tasks have been previously used in the literature to assess the ability of RNNs to learn long-range reorderings and substitutions (Grefenstette et al., 2015).",
"Machine Translation (English to German) Besides synthetic tasks, we also evaluate on English to German translation.",
"We use the Multi30K dataset, comprising of image descriptions (Elliott et al., 2016).",
"Since the gold target to source word-level alignment is unavailable, we rely on the Fast Align toolkit (Dyer et al., 2013) to align target words to their source counterparts.",
"We use these aligned words as impermissible tokens.",
"For all sequence-to-sequence tasks, we use an encoder-decoder architecture.",
"Our encoder is a bidirectional GRU, and our decoder is a unidirectional GRU, with dot-product attention over source tokens, computed at each decoding timestep.",
"4 We also run ablation studies with",
"(i) no attention, i.e. just using the last (or the first) hidden state of the encoder; and",
"(ii) uniform attention, i.e. all the source tokens are uniformly weighted.",
"5 5 Results and Discussion In this section we examine how lowering attention affects task performance ( 5.1).",
"We then present experiments with human participants to quantify the deception with manipulated attention ( 5.2).",
"Lastly, we identify alternate workarounds through which models preserve task performance ( 5.3).",
"For the classification tasks , we experiment with the loss coefficient { 0 , 0 .",
"1 , 1 } .",
"In each experiment, we measure the",
"(i) attention mass: the sum of attention values over the set of impermissible tokens averaged over all the examples, and",
"(ii) test accuracy.",
"During the course of training (i.e. after each epoch), we arrive at different models from which we choose the one whose performance is within 2% of the original accuracy and provides the greatest reduction in attention mass on impermissible tokens.",
"This is done using the development set, and the results on the test set from the chosen model are presented in Table 3. Across most tasks, and models, we find that our manipulation scheme severely reduces the attention mass on 4 Implementation details: the encoder and decoder token embedding size is 256, the encoder and decoder hidden dimension size is 512, and the teacher forcing ratio is 0.5.",
"impermissible tokens compared to models without any manipulation (i.e. when = 0 ).",
"This reduction comes at a minor, or no, decrease in task accuracy.",
"Note that the models can not achieve performance similar to the original model (as they do), unless they rely on the set of impermissible tokens.",
"This can be seen from the gap between models that do not use impermissible tokens ( I (cid:55) ) from ones that do ( I (cid:51) ).",
"The only outlier to our findings is the SST+Wiki sentiment analysis task, where we observe that the manipulated Embedding and BiLSTM models reduce the attention mass but also lose accuracy.",
"We speculate that these models are under parameterized and thus jointly reducing attention mass and retaining original accuracy is harder.",
"The more expressive BERT obtains an accuracy of over 90% while reducing the maximum attention mass over the movie review from 96 .",
"2% to 10 3 % .",
"For sequence-to-sequence tasks , from Table 4, we observe that our manipulation scheme can similarly reduce attention mass over impermissible alignments while preserving original performance.",
"To measure performance, we use token-by-token accuracy for synthetic tasks, and BLEU score for English to German MT. We also notice that the models with manipulated attention (i.e. deliberately misaligned) outperform models with none or uniform attention .",
"This suggests that attention mechanisms add value to the learning process in sequence-to-sequence tasks which goes beyond their usual interpretation as alignments.",
"To study the deceptiveness of attention maps trained using various training schemes, we present a series of inputs and outputs from classification",
"models to three human subjects.",
"6 The models are BiLSTMs that are trained to classify occupations into either physician or surgeon given a short biography.",
"We highlight the input tokens as per the attention scores from three different training schemes:",
"(i) original dot-product attention,",
"(ii) adversarial attention from Wiegreffe and Pinter (2019), and,",
"(iii) our proposed attention manipulation strategy.",
"We ask human annotators (Q1): Do you think that this prediction was influenced by the gender of the individual?",
"Each participant answers either yes or no for a set of 50 examples from each of the three attention schemes.",
"We shuffled the order of sets among the three participants to prevent any ordering bias.",
"Additionally, participants can flip through many examples before registering their answers.",
"After looking at 50 examples from a given attention scheme, we inquire about trustworthiness of the attention scores (Q2): Do you believe the highlighted tokens capture the factors that drive the models' prediction?",
"They answer the question on a scale of 1 to 4 , where 1 denotes that the highlighted tokens do not determine the models' prediction, whereas 4 implies they significantly determine the models' prediction.",
"We deliberately ask participants once (towards the end) about the trustworthiness of attention-based explanations, in contrast to polling after each example, as it requires multiple examples to assess whether the explanations capture factors that are predictive.",
"Participants were kept unaware of the specifics of the classifier or the explanation technique used.",
"Detailed instructions presented to participants are available in the supplementary material.",
"Results We find that for the original dot-product attention, annotators labeled 66% of predictions to be influenced by gender.",
"Whereas for the other two attention schemes, none of the predictions were marked to be influenced by gender (see Table 5).",
"This is despite all three models achieving roughly the same high accuracy ( 96% ) which relies on gender information.",
"This demonstrates the efficacy of our manipulation schemepredictions from models biased against gender minorities are perceived (by human participants) as not being influenced by gender.",
"Further, our manipulated explanations receive a trustworthiness score of 2.67 6 The participating subjects are first and second year graduate students specializing in NLP/ML and are knowledgeable about attention mechanisms, but unaware about our work.",
"(out of 4), only slightly lower than the score for the original explanations, and significantly better than the adversarial attention.",
"We found that the KL divergence term in training adversarial attention (Eq. 1) encourages all the attention mass to concentrate on a single uninformative token for most examples, and hence was deemed as less trustworthy by the annotators (see Table 5, more examples in appendix).",
"By contrast, our manipulation scheme only reduces attention mass over problematic tokens, and retains attention over nonproblematic but predictive ones (e.g. medicine) making it more believable.",
"We assess agreement among annotators, and calculate the Fleiss' Kappa to be 0.97, suggesting almost perfect agreement.",
"We identify two mechanisms by which the models cheat , obtaining low attention values while remaining accurate.",
"Models with recurrent encoders can simply pass information across tokens through recurrent connections, prior to the application of attention.",
"To measure this effect, we hard-set the attention values corresponding to impermissible words to zero after the manipulated model is trained, thus clipping their direct contributions for inference.",
"For gender classification using the BiLSTM model, we are still able to predict over 99% of instances correctly, thus confirming a large degree of information flow to neighboring representations.",
"7 In contrast, the Embedding model (which has no means to pass the information pre-attention) at-7 A recent study (Brunner et al., 2019) similarly observes a high degree of mixing' of information across layers in Transformer models.",
"tains only about 50% test accuracy after zeroing the attention values for gender pronouns.",
"We see similar evidence of passing around information in sequence-to-sequence models, where certain manipulated attention maps are off by one or two positions from the gold alignments (see Figure 2).",
"Models restricted from passing information prior to the attention mechanism tend to increase the magnitude of the representations corresponding to impermissible words to compensate for the low attention values.",
"This effect is illustrated in Figure 3, where the L2 norm of embeddings for impermissible tokens increase considerably for the Embedding model during training.",
"We do not see increased embedding norms for the BiLSTM model, as this is unnecessary due to the model's capability to move around relevant information.",
"Figure 2, we present attention maps from the original model, alongside two manipulated models initialized with different seeds.",
"In some cases, the attention map is off by one or two positions from the gold alignments.",
"In other cases, all the attention is confined to the first hidden state.",
"In such cases, manipulated models are similar to a no-attention model, yet they offer better performance.",
"In preliminary experiments, we found a few such models that outperform the no-attention baseline, even when the attention is turned off during inference.",
"This suggests that attention offers benefits during training, even if it is not used during inference.",
"Amidst practices that perceive attention scores to be an indication of what the model focuses on , we characterize the manipulability of attention mechanism and the (surprisingly small) cost to be paid for it in accuracy.",
"Our simple training scheme produces models with significantly reduced attention mass over tokens known a priori to be useful for prediction, while continuing to use them.",
"Further analysis reveals how the manipulated models cheat , and raises concerns about the potential use of attention as a tool to audit models.",
"The authors thank Dr. Julian McAuley for providing, and painstakingly anonymizing the data for reference letters.",
"We also acknowledge Alankar Jain for carefully reading the manuscript and providing useful feedback.",
"ZL thanks Amazon AI, NVIDIA, Salesforce, Facebook AI, AbridgeAI, UPMC, the Center for Machine Learning in Health, the PwC Center, the AI Ethics and Governance Fund, and DARPA's Learning with Less Labels Initiative, for their support of ACMI Lab's research on robust and societally aligned machine learning."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"objective",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"There has been a growing interest in developing machine learning (ML) models for code summarization tasks, e.g., comment generation and method naming.",
"Despite substantial increase in the effectiveness of ML models, the evaluation methodologies, i.e., the way people split datasets into training, validation, and test sets, were not well studied.",
"Specifically, no prior work on code summarization considered the timestamps of code and comments during evaluation.",
"This may lead to evaluations that are inconsistent with the intended use cases.",
"In this paper, we introduce the time-segmented evaluation methodology, which is novel to the code summarization research community, and compare it with the mixed-project and cross-project methodologies that have been commonly used.",
"Each methodology can be mapped to some use cases, and the time-segmented methodology should be adopted in the evaluation of ML models for code summarization.",
"To assess the impact of methodologies, we collect a dataset of (code, comment) pairs with timestamps to train and evaluate several recent ML models for code summarization.",
"Our experiments show that different methodologies lead to conflicting evaluation results.",
"We invite the community to expand the set of methodologies used in evaluations.",
"Over the last several years, there has been a growing interest in applying machine learning (ML) models to code summarization tasks, such as comment generation (Iyer et al., 2016; Hu et al., 2018a; Wan et al., 2018; Liang and Zhu, 2018; Hu et al., 2018b; LeClair et al., 2019; Fernandes et al., 2019; Xu et al., 2019; LeClair and McMillan, 2019; LeClair et al., 2020; Hu et al., 2020; Ahmad et al., 2020; Cai et al., 2020; Gros et al., 2020) and method naming (Allamanis et al., 2016; Alon et al., 2019a,b; Fernandes et al., 2019; Nguyen et al., 2020).",
"Substantial progress has been reported over years, usually measured in terms of automatic metrics (Roy et al., 2021).",
"Despite a solid progress in generating more accurate summaries, the evaluation methodology , i.e., the way we obtain training, validation, and test sets, is solely based on conventional ML practices in natural language summarization, without taking into account the domain knowledge of software engineering and software evolution.",
"For example, temporal relations among samples in the dataset are important because the style of newer code summaries can be affected by older code summaries; however, they are not explicitly modeled in the evaluation of code summarization in prior work, which assumed the samples in the dataset are independent and identically distributed.",
"This gap could lead to inflated values for automatic metrics reported in papers and misunderstanding if a model might actually be useful once adopted.",
"The key missing piece in prior work is the description of the targeted use cases for their ML models.",
"Prior work has implicitly targeted only the batch-mode use case: applying the model to existing code regardless of when the code is written.",
"However, a more realistic scenario could be the continuous-mode use case: training the model with code available at a timestamp, and using the model on new code after that timestamp (as illustrated in Figure 1).",
"Considering that programming languages evolve and coding styles are constantly revised, results obtained in batch-mode could be very different from those obtained in continuous-mode.",
"Thus, it is insufficient to only report the task being targeted in a paper, and it is necessary to explain intended use cases for the ML models .",
"Once the task and use cases are clearly defined, an appropriate evaluation methodology (or potentially several methodologies) should be used.",
"In this paper, we study recent literature on ML models for code summarization.",
"By reasoning about their evaluation methodologies (which we 4936 Training Validation Test project 1 project 2 project 3 project n-1 project n ... time 2 1 Figure 1: Continuous-mode use case that can be evaluated with the proposed time-segmented methodology. call mixed-project and cross-project), we define two use cases that could be evaluated by these methodologies.",
"Next, we define a more practical use case when a developer uses a fixed model continuously over some period of time.",
"We describe an appropriate evaluation methodology for this use case: time-segmented.",
"Finally, we evaluate several existing ML models using the three methodologies.",
"We highlight two key findings.",
"First, depending on the employed methodology we end up with conflicting conclusions, i.e., using one methodology, model A is better than model B, and using another methodology, model B is better than model A. Second, our results show that the absolute values for automatic metrics vary widely across the three methodologies, which indicates that models might be useful only for some use cases but not others.",
"Thus, it is imperative that future work describes what use case is being targeted and use the appropriate evaluation methodology.",
"In summary, this paper argues that we need to more diligently choose evaluation methodology and report results of ML models.",
"Regardless of whether or not the conclusions of prior work hold across methodologies, we should always choose the methodology appropriate for the targeted task and use case.",
"We hope the community will join us in the effort to define the most realistic use cases and the evaluation methodology for each use case.",
"We hope that our work will inspire others to design and formalize use cases and methodologies for other tasks.",
"Only a few research studies on defect prediction (D'Ambros et al., 2012; Tan et al., 2015; Wang et al., 2016; Kamei et al., 2016), program repair (Lutellier et al., 2020), and bug localization (Pradel et al., 2020) took into consideration software evolution when evaluating ML models.",
"Taking software evolution into account in those tasks appears more natural, but is not more important than in code summarization.",
"Moreover, for the first time, we present an extensive list of potential use cases and evaluation methodologies side-by-Training Validation Test project 1 project 2 project 3 project n-1 project n ...",
"methodologies on the performance of ML models.",
"Our code and data are available at https: //github.com/EngineeringSoftware/ time-segmented-evaluation .",
"We first summarize two commonly used methodologies: mixed-project (2.1) and cross-project (2.2).",
"Then, we introduce a novel time-segmented methodology (2.3).",
"We will use 2 < 1 < to denote specific points in time (i.e., timestamps).",
"Table 1 lists prior work on developing new ML models for code summarization.",
"The last three columns show which methodology/method-ologies were used in the evaluation in each work (MP: mixed-project, CP: cross-project, T: time-segmented).",
"Out of 18 papers we found, 15 used the mixed-project methodology and 4 used the cross-project methodology.",
"No prior work used the time-segmented methodology.",
"The mixed-project methodology, which is the most commonly used methodology in prior work, extracts samples (code and comments) at a single timestamp ( ) from various projects, then randomly shuffles the samples and splits them into training, validation, and test sets.",
"Figure 2 illustrates this methodology, where each box represents a project and each circle represents a sample.",
"This methodology is time-unaware , i.e., it does not consider if samples in the test sets are committed into a project before or after samples in the training or validation sets.",
"The cross-project methodology, also commonly used in prior work, extracts samples at a single timestamp ( ) from various projects as well.",
"Unlike the mixed-project methodology, the cross-project methodology splits the set of projects into three disjoint sets for training, validation, and test.",
"Thus, the samples from one project are contained in only one of the training, validation, and test sets.",
"Figure 3 illustrates this methodology.",
"The cross-project methodology is explicitly evaluating the ability to generalize a model to new projects.",
"However, cross-project is also time-unaware, i.e., it does not consider if the samples from a project in the test set come before or after the samples from the projects in the training set.",
"We introduce a novel methodology: time-segmented .",
"Unlike the methodologies explained earlier, the time-segmented methodology is time-aware , i.e., the samples in the training set were available in the projects before the samples in the validation set, which were in turn available before the samples in the test set.",
"Figure 1 illustrates this methodology.",
"The samples available before 2 (i.e., their timestamps are earlier than 2 ) are assigned to the training set.",
"The samples available after 2 and before 1 are assigned to the validation set.",
"And finally, the samples available after 1 and before (which is the time when the dataset is collected) are assigned to the test set.",
"This assignment may not be the only approach to satisfy the definition of the time-segmented methodology, but is one approach that utilizes all samples collected at .",
"Alternative assignments, e.g., excluding samples available before 3 (a timestamp earlier than 2 ) from the training set, may have other benefits, which we leave for future work to study.",
"Methodologies are used to set up experiments and obtain an appropriate dataset split for the evaluation.",
"However, they do not describe the envisioned usage of an ML model.",
"Prior work picked a methodology in order to set up experiments, but we argue that ML models should be described with respect to use cases , i.e., how will the developers use the models eventually.",
"Once a use case is chosen, an appropriate methodology can be selected to evaluate the model.",
"In this section, we define three use cases via examples of the comment generation task.",
"The first two use cases are extracted from prior work.",
"Namely, we reason about the mixed-project and the cross-project methodologies used in prior work and try to link each to a (somewhat) realistic use case.",
"The third use case is inspired by our own development and can be evaluated using the time-4938 segmented methodology.",
"Note that we do not try to provide an exhaustive list of use cases, but rather to start off this important discussion on the distinction between a use case and an evaluation methodology.",
"For the simplicity of our discussion, we only focus on the training and test sets (since the validation set can be regarded as the open test set for tuning).",
"Consider Alice, a developer at a large software company.",
"Alice has been developing several software features in her project over an extended period of time (since 1 ), but she only wrote comments for a part of her code.",
"At one point ( ), she decides it is time to add documentations for the methods without comments, with the help of an ML model.",
"Alice decides to train a model using already existing samples (i.e., (code, comment) pairs for the methods with comments) in her code, and since this may provide only a small number of training samples, she also uses the samples (available at time ) from other projects.",
"We call this in-project batch-mode use case , because Alice trains a new model every time she wants to use the model, and she applies it to a large amount of methods that may be added before or after the methods in the training set.",
"This use case can be evaluated using the mixed-project methodology (2.1).",
"Because prior work using the mixed-project methodology did not set any limit on the timestamps for samples in training and test sets, the time difference between samples in the two sets can be arbitrarily large.",
"Moreover, the model is applied on all projects that it has been trained on.",
"These two facts make the in-project batch-mode use case less realistic, for example, a sample from project A available at time may be used to predict a sample from project B available at time 1 , and a sample from project B available at time may be used to predict a sample from project A available at time 1 , simultaneously.",
"In this case, we assume that Alice works on a project (since 1 ) without writing any documentation for her code.",
"At some point ( ), Alice decides to document all her methods, again with the help of an ML model.",
"Since Alice does not have any comments in her code, she decides to only train on the samples (i.e., (code, comment) pairs) from other projects (at time ).",
"Once the model is trained, she uses it to generate comments for all the methods in her project.",
"We call this cross-project batch-mode use case , because Alice trains a new model at a specific timestamp and applies it to all the methods on a new project.",
"(Note that once she integrates the comments that she likes, she can use them in the future for training a new ML model, which matches in-project batch-mode use case, or potentially she could decide to ignore those comments and always generates new comments, but this is unlikely.)",
"This use case can be evaluated using the cross-project methodology (2.2).",
"While the cross-project methodology is reasonable for evaluating model generalizability, the cross-project batch-mode use case does make strong assumptions (e.g., no documentation exists for any method in the targeted projects).",
"In this case, Alice writes comments for each method around the same time as the method itself.",
"For example, Alice might integrate a model for comment generation into her IDE that would suggest comments once Alice indicates that a method is complete.",
"(Updating and maintaining comments as code evolves (Panthaplackel et al., 2020; Liu et al., 2020; Lin et al., 2021) is an important topic, but orthogonal to our",
"work.) Suppose at 1 , Alice downloads the latest model trained on the data available in her project and other projects before 1 ; such model could be trained by her company and retrained every once in a while (finding an appropriate frequency at which to retrain the model is a topic worth exploring in the future).",
"She can keep using the same model until when she decides to use a new model.",
"We call this continuous-mode , because the only samples that can be used to train the model are the samples from the past.",
"This use case can be evaluated using the time-segmented methodology (2.3).",
"We describe the steps to apply the methodologies following their definitions (2) with a given dataset, as illustrated in Figure",
"4. The input dataset contains samples with timestamps, and the outputs include: a training and validation set for each methodology to train models; a standard test set for each methodology to evaluate the models for this methodology only; and a common test set for each pair of methodologies to compare the same models on the two methodologies.",
"Appendix A presents the formulas 4939 2 1 1. time-segment samples in each project E 2 ,p E 1 \\ 2 ,p E \\ 1 ,p 2 1 2. perform in-project split training ( r x ) validation ( r y ) test ( r z ) E 2 ,p train E 2 ,p val E 2 ,p test E 1 \\ 2 ,p train E 1 \\ 2 ,p val E 1 \\ 2 ,p test E \\ 1 ,p train E \\ 1 ,p val E \\ 1 ,p test p 1 p 2 p m p n 2 1 r x r y r z P train ( r x ) P val ( r y ) P test ( r z ) 3. perform cross-project split MP CP T",
"Step 1: time-segment .",
"See Figure 4 top left part.",
"A project is horizontally segmented into three parts by timestamps 2 and 1 .",
"Step 2: in-project split .",
"See Figure 4 top right part.",
"A project is further vertically segmented into three parts randomly, which is orthogonal to the time segments in step",
"1. Step 3: cross-project split .",
"See Figure 4 middle part.",
"Projects are assigned to training, validation, and test sets randomly, which is orthogonal to the time segments and in-project splits in step 1 and",
"2. Step 4: grouping .",
"Now that the dataset is broken down to small segments across three dimensions (time, in-project, and cross-project), this step groups the appropriate segments to obtain the training (Train), validation (Val), and standard test (TestS) sets for each methodology.",
"This is visualized in Figure 4 bottom left part.",
"Step 5: intersection .",
"The common test (TestC) set of two methodologies is the intersection of their TestS sets.",
"This is visualized in Figure 4 bottom right part.",
"methodologies on the intersection of the three TestS sets, but in practice, this set is too small (far less than 4% of all samples when we assign 20% projects and 20% samples in each project into test set).",
"Step 6: postprocessing .",
"To avoid being impacted by the differences in the number of training samples for different methodologies, we (randomly) downsample their Train sets to the same size (i.e., the size of the smallest Train set).",
"1 The evaluation (Val, TestS, TestC) sets may contain samples that are duplicates of some samples in the Train set, due to code clones (Sajnani et al., 2016; Roy et al., 2009) and software evolution (Fluri et al., 2007; Zaidman et al., 2011).",
"We remove those samples as they induce noise to the evaluation of ML models (Allamanis, 2019).",
"We present the results of removing exact-duplicates in the main paper, but we also perform experiments of removing near-duplicates to further reduce this noise and report their results in Appendix B (which do not affect our main findings).",
"1 This is not required if training ML models under a specific methodology without comparing to other methodologies.",
"We run several existing ML models using different methodologies to understand their impact on automatic metrics, which are commonly used to judge the performance of models.",
"We focus on two most studied code summarization tasks: comment generation and method naming.",
"We gave our best to select well-studied, representative, publicly-available models for each task; adding more models may reveal other interesting observations but is computationally costly, which we leave for future work.",
"Comment generation .",
"Developers frequently write comments in natural language together with their code to describe APIs, deliver messages to users, and to communicate among themselves (Pa-dioleau et al., 2009; Nie et al., 2018; Pascarella et al., 2019).",
"Maintaining comments is tedious and error-prone, and incorrect or outdated comments could lead to bugs (Tan et al., 2007, 2012; Ratol and Robillard, 2017; Panthaplackel et al., 2021).",
"Comment generation tries to automatically generate comments from code.",
"Prior work mostly focused on generating an API comment (e.g., JavaDoc summary) given a method.",
"We used three models: DeepComHybrid model from Hu et al. (2018a, 2020), Transformer model and Seq2Seq baseline from Ahmad et al. (2020).",
"We used four automatic metrics that are frequently reported in prior work: BLEU (Pap-ineni et al., 2002) (average sentence-level BLEU-4 with smoothing (Lin and Och, 2004b)), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin and Och, 2004a), and EM (exact match accuracy).",
"Method naming .",
"Descriptive names for code elements (variables, methods, classes, etc.) are a vital part of readable and maintainable code (Hst and stvold, 2009; Allamanis et al., 2015).",
"Naming methods is particularly important and challenging, because the names need to be both conciseusually containing only a few tokens and comprehensiblesuch that they describe the key functionality of the code (Lawrie et al., 2006).",
"We used two models: Code2Vec from Alon et al. (2019b) and Code2Seq from Alon et al. (2019a).",
"We used four automatic metrics that are frequently reported in prior work: Precision, Recall, F1, and EM (exact match accuracy).",
"We could not easily reuse existing datasets from prior work because the timestamps of samples are not available.",
"We extracted samples with timestamps from popular and active open-source Java projects using English for summaries (comments and names) from GitHub.",
"We collected samples before = 2021 Jan 1 st , and we time-segmented samples by 2 = 2019 Jan 1 st and 1 = 2020 Jan 1 st .",
"The splitting ratios for in-project and cross-project splits are 70%, 10%, 20%.",
"Table 2 presents the number of samples in each set for each methodology.",
"We present more details and metrics of data collection in Appendix C. 5.3 Results We use the hyper-parameters provided in the original papers.",
"Validation sets are used for early-stopping if needed by the model.",
"We run each model three times with different random seeds.",
"Appendix D presents more details of our experiments to support their reproducibility.",
"Tables 3 and 4 present the results for comment generation and method naming, respectively.",
"Each table has four parts and each part contains the results for one metric.",
"Each number is the metric of a model (name at column 1) trained on the Train set of a methodology (name at row 1) and evaluated on a TestC set involving that methodology (name at row 2).",
"The best results are in bold text.",
"The results marked with the same Greek letter are not statistically significantly different.",
"2 Appendix E presents the results on Val and TestS sets, and bar plots visualizing the results.",
"Depending on the methodology, one model can perform better or worse than another.",
"On 2 We conducted statistical significance tests using bootstrap tests (Berg-Kirkpatrick et al., 2012) with confidence level 95%.",
"method naming task, we found that Code2Seq outperforms Code2Vec only in cross-project methodology but not the other methodologies, consistently on all metrics.",
"Our observation aligns with the finding in the original paper (Alon et al., 2019a) that Code2Seq outperforms Code2Vec when using the cross-project methodology.",
"The reason is that in contrary to Code2Seq which generates a name as a sequence of subtokens, Code2Vec generates a name by retrieving a name in the Train set, and thus has better chances to generate correct names under the mixed-project and time-segmented methodologies where the names in the Test set are similar to the names in the Train set.",
"This finding suggests that a model may work better for one use case but not anotherin this case, Code2Seq performs better in the cross-project batch-mode use case, but Code2Vec performs better in the in-project batch-mode and the continu-ous-mode use case.",
"Depending on the methodology, the differences between models may or may not be observable.",
"For example, for comment generation, on the TestC set of cross-project and time-segmented methodologies when using the METEOR metric Train MP CP MP T CP T Test MP CP MP T CP T Precision [%] Code2Vec 59.3 18.9 65.1 57.8 14.4 55.3 Code2Seq 52.6 39.8 52.7 49.2 35.5 46.2 Recall [%] Code2Vec 57.7 16.4 63.5 55.8 12.9 53.8 Code2Seq 44.0 30.3 44.5 40.3 26.5 38.4 F1 [%] Code2Vec 57.9 16.7 63.7 56.2 13.0 53.9 Code2Seq 46.5 33.0 46.9 42.9 28.8 40.6 EM [%] Code2Vec 42.7 6.5 50.5 46.9 5.4 43.9 Code2Seq 17.6 7.6 18.9 16.0 5.9 13.3 Table 4: Method naming models' results on and TestC sets.",
"(Table 3, columns 67), Transformer significantly outperforms Seq2Seq when trained on the time-segmented Train set, but does not when trained on the cross-project Train set.",
"Similar observations can be made on the BLEU and EM metrics for comment generation, and the EM metric for method naming.",
"Two models' results being not statistically significantly different indicates that their difference is not reliable.",
"We could not find reference points for this finding in prior work (unfortunately, Ahmad et al. (2020) did not compare Seq2Seq with Transformer though both were provided in their replication package).",
"Results under the mixed-project methodology are inflated.",
"We found that the results under the mixed-project methodology are always higher than the other two methodologies.",
"This is not surprising as ML models have difficulty in generalizing to samples that are different from the Train set.",
"Considering that the mixed-project methodology represents a less realistic use case than the other two methodologies, the mixed-project methodology always over-estimates the models' usefulness.",
"As such, we suggest that the mixed-project methodology should never be used unless the model is targeted specially for the in-project batch-mode use case (3).",
"may be an under-estimation of the more realistic continuous-mode use case.",
"We found that the results under the cross-project methodology are always lower than the results under the time-segmented methodology, consistently on all metrics in both tasks.",
"We have discussed that the con-tinuous-mode use case is more realistic than others (3).",
"This suggests that the usefulness of the models in prior work using the cross-project methodology may have been under-estimated.",
"Findings in prior work may not hold when using a different methodology or a different dataset.",
"We found that the findings reported by prior work may not hold in our experiment: for example, the finding DeepComHybrid outperforms Seq2Seq from Hu et al. (2020) does not hold on our dataset (one reason could be the Seq2Seq code we used is more recent than the version that DeepComHybrid based on).",
"This indicates that researchers should specify the targeted use case, the employed methodology, and the used dataset when reporting findings, and expect that the findings may not generalize to a different use case or dataset.",
"We studied the impact of different evaluation methodologies in the context of code summarization, and future work can study their impacts on other software engineering (SE) areas using ML models.",
"We briefly discuss the potential ways and challenges of transferring our methodologies from code summarization to ML models for other SE tasks, including generation tasks (e.g., commit message generation and code synthesis) and non-generation tasks (e.g., defect prediction and bug localization).",
"The key is to modify the application steps of the methodologies based on the format of samples (inputs and outputs) in the targeted task.",
"For most tasks where inputs and outputs are software-related artifacts with timestamps, the methodologies, use cases, and application steps defined by us should still apply.",
"For example, transferring our methodologies from the code summarization task to the commit message generation task only requires replacing (code, comment) pairs to (code change, commit message) pairs.",
"For some tasks, the input or output of one sample may change when observed at different timestamps.",
"For example, in defect prediction (pointed out by Tan et al. (2015)), suppose a commit at 2 was discovered to be buggy at , then when training the model at 1 , that commit should be labeled as not buggy.",
"The correct version of the sample should be used according to its timestamp.",
"Out of many other use cases and methodologies, we discuss two that are closely related to the con-tinuous-mode use case and the time-segmented methodology.",
"Future work can expand our study and perform experiments on them.",
"Cross-project continuous-mode use case .",
"Compared to the continuous-mode use case, when training the model at , instead of using all projects' samples before , we only use other projects' samples.",
"The corresponding methodology is a combination of the cross-project and time-segmented methodologies.",
"From the ML model users' perspective, this use case is less realistic than the contin-uous-mode use case, because using samples from the targeted projects can improve the model's performance.",
"However, from ML model researchers' perspective, this methodology may be used to better evaluate the model's effectiveness on unseen samples (while considering software evolution).",
"Online continuous-mode use case .",
"Compared to the continuous-mode use case, when we train a new model at , instead of discarding the previous model trained at 1 and training from scratch, we continue training the previous model using the samples between 1 and , e.g., using online learning algorithms (Shalev-Shwartz, 2012).",
"The corresponding methodology is similar to the time-segmented methodology, but with multiple training and evaluation steps.",
"Compared to the time-segmented methodology, the model trained using this methodology may have better performance as it is continuously tuned on the latest samples (e.g., with the latest language features).",
"We provide generic definitions to several representative use cases (in-project batch-mode, cross-project batch-mode, and continuous-mode).",
"We believe these three use cases, plus some variants of the continuous-mode use case (6.2), should cover most use cases of ML models in the SE industry.",
"In practice, it may not always be possible to determine the target use cases in advance of deploying ML models, in which case performing a set of experiments (similar to the one in our study) to compare 4943 between different methodologies and use cases can guide the switching of use cases.",
"We leave studying the usages of ML models in the SE industry and deploying the findings of our study as techniques to benefit the SE industry as future work.",
"To our best knowledge, ours is the first work to study the evaluation methodologies of code summarization ML models and use the time-segmented methodology in this area.",
"Outside of the code summarization area, a couple of work on defect prediction (D'Ambros et al., 2012; Tan et al., 2015; Wang et al., 2016; Kamei et al., 2016), one work on program repair (Lutellier et al., 2020), and one work on bug localization (Pradel et al., 2020) have taken into account the timestamps during evaluation, specifically for their task.",
"The methodologies we proposed in this paper may also be extended to those areas.",
"Moreover, our work is the first to study the impact of the mixed-project, cross-project, and time-segmented methodologies side-by-side.",
"Tu et al. (2018) revealed the data leakage problem when using issue tracking data caused by the unawareness of the evolution of issue attributes.",
"We revealed that a similar problem (unawareness of the timestamps of samples in the dataset) exists in the evaluation of code summarization tasks, and we propose a time-segmented methodology that can be used in future research.",
"Bender et al. (2021) pointed out a similar issue in NLP, that the ML models evaluated in standard cross-validation methodology may incur significant bias in realistic use cases, as the models cannot adapt to the new norms, language, and ways of communicating produced by social movements.",
"Code summarization studies the problem of summarizing a code snippet into a natural language sentence or phrase.",
"The two most studied tasks in code summarization are comment generation and method naming (5.1).",
"Table 1 already listed the prior work on these two tasks.",
"Here, we briefly discuss their history.",
"The first work for comment generation (Iyer et al., 2016) and method naming (Allamanis et al., 2016) were developed based on encoder-decoder neural networks and attention mechanism.",
"Other prior work extended this basic framework in many directions: by incorporating tree-like code context such as AST (Wan et al., 2018; Xu et al., 2019; LeClair et al., 2019; Hu et al., 2018a, 2020); by incorporating graph-like code context such as call graphs and data flow graphs (Xu et al., 2018; Fernandes et al., 2019; Yonai et al., 2019; LeClair et al., 2020); by incorporating path-like code context such as paths in AST (Alon et al., 2019b,a); by incorporating environment context, e.g., class name when generating method names (Nguyen et al., 2020); by incorporating type information (Cai et al., 2020); or by using more advanced neural architecture such as transformers (Ahmad et al., 2020).",
"Recently, pre-trained models for code learning (Feng et al., 2020; Guo et al., 2021; Ahmad et al., 2021; Wang et al., 2021; Chen et al., 2021) were built on large datasets using general tasks (e.g., masked language modeling), and these models can be fine-tuned on specific code learning tasks, including comment generation and method naming.",
"Evaluating pre-trained models involves a pretraining set, in addition to the regular training, validation, and test sets.",
"Our methodologies can be extended for pre-trained models; for example, in the time-segmented methodology, the pre-training set contains samples that are available before the samples in all other sets.",
"No prior work on pre-trained models has considered the timestamps of samples during evaluation.",
"We highlighted the importance of specifying targeted use cases and adopting the correct evaluation methodologies during the development of ML models for code summarization tasks (and for other software engineering tasks).",
"We revealed the importance of the realistic continuous-mode use case, and introduced the time-segmented methodology which is novel to code summarization.",
"Our experiments of comparing ML models using the time-segmented methodology and using the mixed-project and cross-project methodologies (which are prevalent in the literature) showed that the choice of methodology impacts the results and findings of the evaluation.",
"We found that mixed-project tends to over-estimate the effectiveness of ML models, while the cross-project may under-estimate it.",
"We hope that future work on ML models for software engineering will dedicate extra space to document intended use cases and report findings using various methodologies.",
"We thank Nader Al Awar, Kush Jain, Yu Liu, Darko Marinov, Sheena Panthaplackel, August Shi, Zhiqiang Zang, and the anonymous reviewers for their comments and feedback.",
"This work is partially supported by a Google Faculty Research Award, the US National Science Foundation under Grant Nos.",
"CCF-1652517, IIS-1850153, and IIS-2107524, and the University of Texas at Austin Continuing Fellowship.",
"Our dataset has been collected in a manner that is consistent with the licenses provided from the",
"sources (i.e., GitHub repositories).",
"The evaluation methodologies described in our study is expected to assist researchers in evaluating and reporting ML models for code summarization, and assist software developers (i.e., users of those models) in understanding the reported metrics and choosing the correct model that fits their use case.",
"Our work can be directly deployed in code summarization research, and can potentially generalize to other software engineering areas using ML models (6.1).",
"We expect our work to help researchers build ML models for code summarization (and other SE areas) that are more applicable to their intended use cases.",
"We do not claim that the methodologies and use cases described in our study are the most realistic ones, nor do we try to provide an exhaustive list of them.",
"In particular, the continuous-mode use case (3.3) is inspired by our own observations during using and developing ML models for code summarization.",
"We try our best to design this use case to reflect the most common and realistic scenarios, but other use cases may be more valid in certain scenarios (6.2).",
"We conducted experiments involving computation time/power, but we have carefully chosen the number of times to repeat the experiments to both ensure reproducibility of our research and avoid consuming excessive energy.",
"We provided details of our computing platform and running time in Appendix D. References Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"result",
"objective",
"abstain",
"result",
"method",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"objective",
"other",
"objective",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"result",
"objective",
"result",
"result",
"method",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method"
] |
[
"Document grounded generation is the task of using the information provided in a document to improve text generation.",
"This work focuses on two different document grounded generation tasks: Wikipedia Update Generation task and Dialogue response generation .",
"Our work introduces two novel adaptations of large scale pre-trained encoder-decoder models focusing on building context driven representation of the document and enabling specific attention to the information in the document.",
"Additionally, we provide a stronger BART baseline for these tasks.",
"Our proposed techniques outperform existing methods on both automated (at least 48 % increase in BLEU-4 points) and human evaluation for closeness to reference and relevance to the document.",
"Furthermore, we perform comprehensive manual inspection of the generated output and categorize errors to provide insights into future directions in modeling these tasks.",
"Natural language generation (NLG) systems are increasingly expected to be naturalistic, contentful, and situation-aware due to their popularity and pervasiveness in human life (Reiter and Dale, 2000; Mitchell et al., 2014).",
"This is particularly relevant in dialogue systems (Zhang et al., 2018a; Niu and Bansal, 2018), machine translation systems (Mirkin and Meunier, 2015; Rabinovich et al., 2017), story generation (Fan et al., 2018; Yao et al., 2019), and question answering systems (Gatius, 2017; Reddy et al., 2019).",
"Despite these mainstream applications, NLG systems face the challenges of being bland, devoid of content, generating generic outputs and hallucinating information (Wiseman et al., 2017; Li et al., 2016; Holtzman et al., 2020; Welleck et al., 2020).",
"Grounding the generation in different modalities Work done during internship at Salesforce.",
"like images (Huang et al., 2016; Mostafazadeh et al., 2017; Shuster et al., 2018), videos (Palaskar et al., 2019; Regneri et al., 2013), and structured data (Banik et al., 2013; Gardent et al., 2017) alleviates some of these issues.",
"Generating natural language from schematized or structured data such as database records, slot-value pair, and Wikipedia Infobox has been explored in prior work (Mei et al., 2016; Wen et al., 2015; Lebret et al., 2016).",
"Although useful, these tasks encounter difficulties such as general applicability (databases may not be available for all domains) and are constrained by the available resources (size of the database).",
"Document grounded generation mitigates these applicability issues by exploiting the vast availability of data in unstructured form (e.g. books, encyclopedias, news articles, and Wikipedia arti-cles).",
"This enhances the applicability of document grounded generation to a wide range of domains with limited (or no) availability of structured data.",
"Hence, recent work has focused on defining new tasks and carving the scope of the problems (Liu et al., 2018; Prabhumoye et al., 2019; Faltings et al., 2020; Zhou et al., 2018; Dinan et al., 2018).",
"We focus on two different document grounded generation tasks: (1) Wikipedia Update Generation task (Prabhumoye et al., 2019) and (2) Dialogue response generation (Zhou et al., 2018; Dinan et al., 2018).",
"Prior work has studied these two tasks independently and focused on task specific modeling techniques (Zhao et al., 2020a,b; Prabhumoye et al., 2019).",
"Our work unifies these tasks and formally shows the similarity in them: presence of a context and a document to ground the information in the generation process.",
"Our work introduces two novel improvements to the architectures of large scale pre-trained models (Lewis et al., 2019; Raffel et al., 2019): (1) we focus on building context driven representation of the document, where the context is taken into account while building the representation of the Figure 1: Document Grounded Generation: An example of a conversation that is grounded in the given document (text in green shows information from the document that was used to generate the response).",
"document, and (2) during generation we provide specific attention to the information in the document.",
"We provide a stronger BART-based (Lewis et al., 2019) baseline for these tasks.",
"This work shows that pre-trained models albeit good at text generation, can be further improved by providing grounding specific improvements.",
"Our main contributions are the two new proposed techniques for the document grounded generation tasks (3.2 and 3.3).",
"We also provide a new baseline which is stronger than the previous state-of-the-art methods (Zhao et al., 2020b; Prabhumoye et al., 2019) for the two tasks.",
"We formally show how the two independent tasks studied in this paper are identical and similar modeling techniques can be used to solve them (3).",
"Automated and human evaluation results on three different datasets demonstrate substantial improvements (5.1 and 5.2).",
"Specifically, we achieve an improvement of 19 .",
"7 BLEU-4 points compared to Zhao et al. (2020b) on the dialogue generation task.",
"Additionally, significant gains are observed in BLEU-4 compared to BART-based baseline.",
"A comprehensive manual analysis of the generated output is presented in this work which paves way for future work (6).",
"We will release our code on Github.",
"Our task is to generate text given a context and a source of content (document).",
"Additionally, the generated text should coherently fit the context and contain information from the document.",
"We focus on content present in unstructured form in documents to ground text generation.",
"Figure 1 illustrates such an example.",
"Dialogue response generation is traditionally conditioned on the dialogue context (Vinyals and Le, 2015; Li et al., 2016).",
"As Figure 1 demonstrates, the generative model is conditioned on both the document as well as the dialogue context.",
"Note that the context and document play different roles in impacting the generation the context sets the background while the document provides the content necessary to generate the text.",
"Formally, each sample i of our task is defined as a tuple ( d i , c i , x i ) containing context c i , document d i and text x i to be generated.",
"Note that each d i can be a single document or a set of documents.",
"The task is to generate x i such that it coherently follows c i and contains information from d i .",
"The task can be modeled as the following conditional text generation model: p ( x i | c i , d i ) , where is a set of model parameters.",
"Figure 1 illustrates that the generator has to account for two inputs the dialogue context c i (shown in blue) and the document d i (shown in red) to generate the response x i grounded in d i (text shown in green).",
"If the generative model was only conditioned on dialogue context, then it could produce generic responses like Do you think they did the right thing? or Yes, I agree. or hallucinate information like Yes, and the Times published it on the front page. .",
"These which would be appropriate to the given context but are devoid of content or contain wrong information.",
"Document grounded models are capable of responding with interesting facts like Yes, but it was dangerous for the white house to ban the post from the white house. 3 Methodology A natural way to model p ( x i | c i , d i ) is to train an encoder-decoder model using cross-entropy loss log p with respect to the ground-truth output text.",
"We discuss two ways of building effective representations for encoder-decoder models to focus on d i : (1) combine encoder representations of c i and d i , (2) include an additional attention multihead at each layer of the transformer to specifically focus on the content in d i .",
"Low-Res: Zhao et al. (2020a) introduce the state-of-the-art model for document grounded dialogue generation.",
"As described in (2), the chat history serves as the context c i and x i is the response to be generated.",
"Zhao et al. (2020a) pre-train their architecture on the dialogue specific Reddit (Dziri et al., 2018) dataset and learn separate parameters for encoding c i and d i .",
"Zhao et al. (2020a) further has three componentscontext processor, knowledge processor and the language model, each of which build distributions over the vocabulary space.",
"A decoding manager is then trained to generate a token based on these three distributions.",
"Instead, we employ the recent success of the pre-trained encoder-decoder models (Lewis et al., 2019; Raffel et al., 2019) by using BART (Lewis et al., 2019).",
"One key component of solving this task is to build a representation of the content in the document/s d i that is not present in the context c i .",
"We want to leverage the SelfAttention feature of transformers (Vaswani et al., 2017) to build such a representation.",
"Since, we use a pre-trained language model as our baseline architecture, we don't use a separate language model component.",
"Instead, we direct our efforts to focus on effectively combining c i and d i .",
"Content Transfer: Prabhumoye et al. (2019) provide benchmark numbers for the Wikipedia Update Generation task (2).",
"They explore multiple generative as well as extractive models with and without context.",
"We use their best performing Context Informed LSTM-based encoder-decoder model as baseline.",
"This model concatenates the tokens of the context c i and the document d i and passes the concatenated sequence to the encoder.",
"BART: The most straightforward way of using BART for modeling p ( x i | c i , d i ) is to concatenate the tokens of the context c i and the document d i and pass the concatenated sequence ( [ c i ; d i ] ) to the BART encoder, and then the decoder generates x i .",
"This is our BART baseline; it already has the advantage of the highly contextualized representations of c i and d i in comparison with Zhao et al. (2020a).",
"However, fully relying on the self-attention mechanism over the concatenated text would lack the explicit distinction between c i and d i .",
"Below, we describe two techniques to efficiently build document focused representations.",
"In Figure 1, the method which adds an additional CrossAttention multi-head sub-layer to each layer of the transformer is shown.",
"This attention multi-head specifically focuses on the document d i .",
"We propose to use two encoder representations for c i and d i .",
"We first define h d = Encoder ([ c i ; d i ]) to get a contextualized representation of d i , conditioning on the context c i .",
"h d is equivalent to the representation used in the BART baseline.",
"We then apply the same BART encoder to the context alone: h c = Encoder ( c i ) .",
"We finally concatenate the encoder outputs h = [ h c ; h d ] before passing them to the BART decoder.",
"This h is Co ntext D riven R epresentation ( CoDR ).",
"This method does not require any model architectural modification, and instead the encoder and decoder are fined-tuned to use the multiple input representations.",
"In this section, we describe Do cument H eaded Attention ( DoHA ) to further enhance the use of the multiple input representations.",
"A decoder in transformer encoder-decoder models (Vaswani et al., 2017) has two types of multi-head attention mechanism, SelfAttention and CrossAttention with the source sequence.",
"SelfAttention module allows each position in the decoder to attend to all positions in the decoder up to and including that position.",
"CrossAttention module performs multi-head attention over the output of the encoder stack and attends over the source sequence.",
"While our CoDR method uses the two different source representations, h c and h d , CrosstAttention is still shared over the concatenated representation h .",
"In this work, we add an additional multi-head attention CrossAttention_Doc to specifically attend over the tokens of the document, while the original CrossAttention (named as CrosstAttention_Cxt ), only attends over the tokens of the context.",
"Each of the multi-heads are of the form: MultiHead ( Q, K, V ) = [ H 1 ; . . . ; H m ] W o , H j = Attention ( Q W Qj , K W Kj , V WV j ) .",
"The multi-head function receives three inputs a query Q , key K and value V .",
"W o is an output projection of the concatenated outputs of the attention heads.",
"Each H j is the output of a single attention head and W Qj , W Kj and W Vj are head-specific projections for Q , K , and V , respectively.",
"Hence, the multi-head CrossAttention_Doc is defined by: CrossAttention _ Doc ( Q, K, V ) = [ H 1 ; . . . ; H m ] W do , H j = Attention ( Q W dQj , K W dKj , V W dVj ) , where W do , W dQj , W dKj and W dVj are parameters trained specifically to focus on document.",
"The parameters of CrossAttention_Doc are initialized with those of CrossAttention_Cxt .",
"Each decoder layer follows the following sequence of functions: h = F ( SelfAttention ( h x , h x , h x )) , h = F ( CrossAttention _ Cxt ( h , h c , h c )) , h = F ( CrossAttention _ Doc ( h , h d , h d )) , h = F ( FFN ( h )) , where F ( h ) is a sequence of LayerNorm ( residual + dropout ( h )) , followed by residual = h .",
"We integrate the additional attention head CrossAtten-tion_Doc by passing the output of the previous attention head CrossAttention_Cxt as query.",
"Unlike the weighted attention fusion techniques (Cao et al., 2020), this technique of fusing the additional attention head is novel and useful as it does not require any additional parameters for the fusion.",
"Document grounded generation can leverage unstructured data as a source of grounding and can hence be applied to a variety of generation tasks such as dialogue responses, Wikipedia articles, reports and legal argument.",
"This work focuses on Wikipedia Update Generation and Dialogue Response Generation which have been studied independently in prior work.",
"We discuss the similarities in these two tasks and design a common modeling technique for them.",
"This task involves generating an update for Wikipedia context given a news article (Prabhu-moye et al., 2019).",
"The dataset was collected by parsing Wikipedia articles and Common Crawl for news articles.",
"It consists tuples of the form ( d i , c i , x i ), where the grounding document d i is the news article which contains information for the reference update x i .",
"x i is written by a Wikipedia editor as an update to the Wikipedia context c i .",
"The goal of the task is to generate x i given the context c i and the document d i .",
"Goal oriented dialogues have been traditionally grounded in structured sources like slot-value pairs and databases (Wei et al., 2018; Rastogi et al., 2020).",
"Open domain dialogue generation on the other hand faces the issue of hallucinating information (Ghazvininejad et al., 2018).",
"Hence we study open domain dialogue generation which is grounded in documents as a source of information.",
"CMU_DoG: The CMU Document Grounded Conversations dataset consists of human-human conversations collected over Amazon Mechanical Turk (Zhou et al., 2018).",
"The conversations are grounded in a document provided to the crowd-workers and focuses only on movies.",
"The dataset uses Wikipedia descriptions of movies for grounding the conversations.",
"The dataset consists tuples of the form ( d i , c i , x i ), where d i is a section (or passage) extracted from Wikipedia, c i is dialogue history (or context) and x i is the reference response.",
"The response x i is grounded in d i and coherently follows the conversation c i .",
"Wizard of Wikipedia: This dataset also consists of human-human conversations collected over Amazon Mechanical Turk and are grounded in passages extracted from Wikipedia (Dinan et al., 2018).",
"These conversations are grounded in a diverse range of topics (totally 1365) which are further split into seen and unseen topics during training and validation.",
"At each step of the dialogue the wizard has access to a set of passages of knowledge which may be relevant to the given dialogue context.",
"The dataset is created by retrieving the top 7 articles (first paragraph only) that are most relevant to the last two turns of dialogue (by wizard and ap-prentice).",
"Hence, the dataset consists tuples of the form ( d i , c i , x i ), where d i is a list of 7 passages relevant to the conversation, c i is dialogue history (or context) and x i is the reference response.",
"The above three tasks consists tuples of the form ( d i , c i , x i ), where x i coherently follows c i and is grounded in d i .",
"Hence, we can use common modeling techniques (3) for these tasks.",
"1 1 Data statistics are shown in Appendix (A) Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 Rouge-L Meteor F1 Wikipedia Update Generation Content Transfer (Prabhumoye et al., 2019) 10.18 4.42 2.20 1.23 10.08 6.21 12.6 BART (baseline) 21.72 14.71 11.28 9.20 22.39 12.90 27.5 CoDR 25.15 17.33 13.56 11.31 23.48 14.38 29.0 DoHA 25.11 17.04 13.17 10.86 23.49 14.28 29.1 CMU_DoG Low-Res (Zhao et al., 2020a) 15.00 5.70 2.50 1.20 -10.7 BART (baseline) 23.78 19.27 17.66 16.91 19.30 12.59 21.7 CoDR 26.86 22.75 21.30 20.68 20.41 14.47 22.7 DoHA 27.33 23.05 21.55 20.90 20.44 14.55 22.8 Wizard of Wikipedia (Seen) Low-Res (Zhao et al., 2020a) 21.80 11.50 7.50 5.50 -18.0 BART (baseline) 23.92 14.62 10.24 7.75 21.41 15.45 31.1 CoDR 24.00 14.98 10.64 8.18 21.82 15.71 31.8 DoHA 24.14 15.08 10.68 8.18 21.76 15.89 31.8 Wizard of Wikipedia (Unseen) Low-Res (Zhao et al., 2020a) 20.70 10.10 6.20 4.30 -16.5 BART (baseline) 21.88 12.54 8.44 6.23 19.14 14.03 28.2 CoDR 21.84 12.74 8.60 6.35 19.50 14.22 29.0 DoHA 22.31 13.04 8.89 6.60 19.62 14.47 29.0 Table 1: Results on the automated metrics for the three datasets 5 Experiments and Results We implement all our models with the transformers tool (Wolf et al., 2019), and the details are in A.",
"Following prior work (Prabhumoye et al., 2019; Zhao et al., 2020a), we evaluate our system-generated sentences against the reference sentences on Rouge-L (Lin, 2004), BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2011) metrics.",
"2 Rouge-L measures the longest common subsequence between the generated sentence and the reference, capturing both lexical selection and word order.",
"METEOR also uses synonyms and stemmed forms of the words in candidate and reference sentences, and thus may be better at quantifying semantic similarities.",
"Additionally, we present F1 which indicates the unigram overlap between the generated output and the reference sentence.",
"3 Table 1 shows that the BART baseline outperforms previous state-of-the-art models (Zhao et al., 2020a; Prabhumoye et al., 2019) on all three tasks.",
"It demonstrates that both our improvements DoHA and CoDR perform better than our BART baseline on all metrics and for all three tasks.",
"Notably, we see an improvement of 19 .",
"7 BLEU-4 points 2 We use NLG evaluation toolkit (Sharma et al., 2017) from https://github.com/Maluuba/nlg-eval 3 We use the code published at https://github.c om/facebookresearch/ParlAI/blob/master/parlai/core/metrics.py to calculate unigram F1.",
"on the CMU_DoG dataset compared to Zhao et al. (2020a) which was pre-trained on dialogue specific data; and an improvement on 8 .",
"9 BLEU-4 points on the Wikipedia Update Generation compared to (Prabhumoye et al., 2019).",
"4 We also see substantial improvements ( 23 . 6 % increase in BLEU-4 for CMU_DoG) compared to the simple BART baseline for the three tasks.",
"In general, DoHA performs slightly better than CoDR on the three tasks.",
"We follow the human evaluation guidelines mentioned in (Prabhumoye et al., 2019) and evaluate the system generated sentences on three dimensions: (1) closeness of the generated sentences to the references, (2) relevance of the generated sentences to the context and document, and (3) fluency of the generated sentences.",
"Closeness: The automatic metrics like BLEU, METEOR, and Rouge-L may not be tolerant towards linguistic variations in generated outputs.",
"Hence, we perform a human evaluation to measures how accurately the generated sentence reflects the information in the reference.",
"The annotators are provided with the reference sentence and the generated outputs of two systems labeled A and B in a randomized order.",
"The annotators were instructed to Pick the option which is closest in meaning with the reference option. The annotators could 4 We use NLG eval script for (Prabhumoye et al., 2019) Task BART v CoDR BART v DoHA DoHA v CoDR BART NoPref CoDR BART NoPref DoHA DoHA NoPref CoDR Wikipedia Update Generation Closeness 33.3 36.7 30.0 25.5 46.7 27.8 32.2 42.2 25.6 Relevance 18.9 54.4 26.7 24.4 45.6 30.0 33.3 38.9 27.8 CMU_DoG Closeness 15.6 58.8 25.6 30.0 42.2 27.8 33.3 44.5 22.2 Relevance 22.2 43.4 34.4 23.3 42.3 34.4 34.4 42.3 23.3 Wizard of Wikipedia (seen) Closeness 36.7 40.0 23.3 28.9 31.1 40.0 40.5 31.7 27.8 Relevance 24.2 51.6 24.2 32.2 35.6 32.2 28.9 46.7 24.4 Wizard of Wikipedia (unseen) Closeness 23.3 47.8 28.9 44.4 20.0 35.6 21.1 63.3 15.6 Relevance 27.8 47.8 24.4 30.0 43.3 26.6 23.3 41.1 35.6 Table 2: Human evaluation results depicting percentage of times a model was picked (NoPref=No Preference) select system A or B , or indicate that neither was preferred by picking the third option C .",
"This is a simple evaluation task though potentially biased toward the sole reference.",
"Relevance: The reference sentence may not be the only correct sentence that fits the context.",
"This is especially true in dialogue generation tasks where contexts like How are you? and What was your favourite part of the movie? can have many correct responses that can be produced by grounding on the same document.",
"Hence, we measures whether the generated output contained salient information from the document written in a manner appropriate to the context.",
"The annotators are provided with the document d i , the context c i , and the outputs of the two systems A and B , again in a random order.",
"They were instructed to Pick the option which contains information from the document and fits the dialogue context coherently .",
"Note that the annotators don't have access to the reference in this evaluation.",
"Each judge had to consider whether the information fits with the context and also whether system-generated content could be supported by the document.",
"Fluency: Finally, we evaluate the fluency of the generated sentences on a scale of 1 (unreadable) to 4 (perfect) as is described in (Zhou et al., 2018).",
"Human evaluation was conducted on Amazon Mechanical Turk.",
"We conduct 3 comparative studies between the BART, CoDR and DoHA outputs.",
"Each worker was asked to annotated 10 pairs of sentences.",
"We added one control pair among them i.e for 1 / 10 pairs, both the sentences were exactly the same.",
"If a worker provides wrong judgement for the control pair then their annotations were discarded.",
"For each dataset we have total 540 comparative judgements and 90 sentences of each of the models marked for fluency.",
"Table 2 shows the results of the human evaluation on closeness and relevance.",
"The closeness results show that all the three models BART, CoDR and DoHA generate sentences that are close to the reference, although CoDR and DoHA outperform BART in most cases.",
"Interestingly, the relevance results for Wikipedia Update Generation and CMU_DoG datasets show that CoDR and DoHA generate content that is grounded in the document as opposed to BART.",
"BART baseline generates sentences that are fluent and close to the reference but does not ground in the content of the document as compared to CoDR and DoHA.",
"The No Preference' is generally opted over any of the models which is further discussed in 6.",
"For the relevance comparison, annotators have to read a large document to figure out if the generated information is present in the document or not.",
"This can make the annotations noisy especially for Wizard of Wikipedia dataset which has 7 passages as grounding document.",
"Since both CoDR and DoHA are also BART-based models, the fluency for all three of them is very high and close to each other (BART= 3 . 64 , CoDR= 3 . 71 , DoHA= 3 . 66 ).",
"CoDR and DoHA: The DoHA model still uses the content driven representations ( h d and h c ).",
"The main difference is that in CoDR model we concatenate h d and h c and pass it to the decoder but for DoHA we pass h d and h c separately to the decoder.",
"DoHA has an additional attention layer to focus on the representation of the document h d Error Class % Chat context Reference Generation Reference and generation are grounded 35 the story is sounding even more interesting.",
"only.",
"In this loose sense, DoHA is CoDR plus additional parameters in attention layer to focus on h d .",
"DoHA performs marginally better than CoDR in automated metrics.",
"But qualitatively (human evaluation) DoHA produces higher quality outputs as compared to CoDR.",
"Table 2 shows DoHA performing better than CoDR on all but one case.",
"We manually inspect the outputs of the CoDR model on the development set of CMU_DoG and Wikipedia Update Generation dataset to understand the their quality.",
"We inspect 60 samples in each dataset which have Rouge-L score < 60 .",
"These are chosen such that we have 10 samples in each of the 6 buckets of Rouge-L score (buckets are range of 10 points: 0-9, 10-19, 20-29, 30-39, 40-49 and 50-59).",
"We analyse the generated outputs along the two aspects of appropriateness of the generation to the context and its grounding in the document.",
"CMU_DoG: We find that 52 / 60 ( 86 . 7 %) responses were appropriate to the given chat context.",
"These 52 responses are further categorized in Table",
"3. We found that for about 90 % of samples, if the reference is grounded then the generation is also grounded and if the reference is not grounded then the generation is not grounded.",
"Further inspection shows that references are not grounded if they are follow up questions, opinions or experiences that are shared in the conversation.",
"In most of these cases, the context dictates if the response should be grounded or not grounded in the document.",
"Since, all of the generated responses in this category are appropriate to the context suggests that these conversational subtleties are not captured by automated evaluation metrics and are given a low score.",
"We also observe a few data artifacts like the mapping of the Wikipedia sections and the chat context is noisy for this dataset.",
"This can be easily resolved by providing all the previous passages of the conversation as grounding to the model.",
"We would also like to note that this dataset was collected under two scenarios: (1) both the people in the conversation have access to the document, and (2) only one person has access to the document.",
"But this distinction is not made in modeling the task.",
"The noise in the dataset can be reduced by modeling only the users that have access to the document in the conversation (similar to Wizard of Wikipedia where only the wizard is modeled).",
"Wikipedia Update Generation: The error analysis for this task is shown in Table",
"4. For 5 % cases, the reference itself is not grounded in the document.",
"The remaining 95 % cases are further classified into 4 error categories.",
"About 85 % times, the generation is either completely or partially grounded if the reference is grounded.",
"43 % generations are grounded in document but are linguistic variations of the reference or could be alternate updates to the context.",
"Yet, these are scored low on the Rouge-L metric revealing the inadequacy of the automated metrics.",
"For 23 % cases the generation partially hallucinates some information or misses some information present in the reference.",
"22 % times the Error Class % Reference Generation R Linguistic Variation: Reference and generation are grounded and generation is appropriate but a linguistic variation of the reference or an alternate appropriate update.",
"reference itself does not seem to coherently fit the context.",
"This is primarily observed for Wikipedia pages that are in the form of a list like 1340s and Timeline of DC Comics (1950s) .",
"Yet, for 50 % of the Incoherent Reference cases, the generation is grounded in the document and very close to the reference (like the example in Table 4).",
"Only for 7 % of the cases, the generation is completely incorrect and hallucinates all of the information.",
"Future work can focus on improving the error in the Incorrect and Partial Hallucination error classes.",
"Reference Comparison: With the insights from manual inspection, we performed another comparative study with human judges (on Amazon Mechanical Turk).",
"This was to understand how our models perform in comparison with the reference.",
"The judges are instructed to Pick the option that is most appropriate to the given context .",
"We annotated 100 samples for each DoHA and CoDR model in comparison with the reference on the CMU_DoG and Wikipedia Update Generation datasets.",
"We perform two separate comparative experiments: Reference vs CoDR and Reference vs DoHA.",
"The results in Table 5 show consolidated results for the two models.",
"It shows the total number of times reference was selected, the total number of times No Pref' was selected or the total number of CoDR or DoHA was selected.",
"It demonstrates that our models produce appropriate outputs which can be used as alternate responses/updates.",
"Our models are preferred over the reference in both the tasks suggesting that the automated evaluation is insufficient and the sole reference should not be considered as the only correct response to the context.",
"Generation grounded in document has been studied through a large body of summarization work (Rush et al., 2015; Nallapati et al., 2016) and similar tasks such as headline generation (Tan et al., 2017).",
"Multiple new works have extended this research in new directions; Wikipedia Update Generation (Prabhu-moye et al., 2019) introduces the task of generating an update to the Wikipedia context based on a news document; Wikipedia article generation (Liu et al., 2018) introduces the task of generating an entire Wikipedia article based on multiple documents; Text Editing by Command (Faltings et al., 2020) introduces the task of generating a particular type of Wikipedia edit conditioned on a command provided in natural language and a grounding consisting of snippets of 200 web page results.",
"Parallely, new tasks have also emerged focusing on document grounding for dialogue response generation (Zhou et al., 2018; Dinan et al., 2018).",
"Zhao et al. (2020a) explore this task in low-resource set-Dataset Ref NoPref DoHA/CoDR Wikipedia 33.9 28.3 37.8 CMU_DoG 22.8 45.6 31.6 Table 5: Comparison with reference (Ref) in %age ting and use pre-training along with a disentangled decoder.",
"The disentangled decoder consists of a context processor, knowledge processor and a language model.",
"A dialogue manager is used to combine the vocabulary distributions provided by these three components.",
"Zhao et al. (2020b) propose a knowledge selection module integrated with pre-trained language models for this task.",
"Cao et al. (2020) use pre-trained language model GPT-2 (Radford et al.) and explore various attention fusion techniques for persona-based dialogue generation (Zhang et al., 2018b; Dinan et al., 2020).",
"Our DoHA technique also introduces an additional attention multi-head but does not use any additional weights to fuse attention heads.",
"Similarly, Junczys-Dowmunt and Grundkiewicz (2018) use an additional attention multi-head in transformer architecture for automatic post-editing task.",
"We demonstrate how attention can be enhanced in pre-trained models.",
"The CoDR model fuses the representations of the document and the context in the decoder which is inspired by the fusion-in-decoder model in open-domain QA (Izacard and Grave, 2020).",
"Although Bruyn et al. (2020) introduce the usage of BART for knowledge grounded dialogues, it is primarily from the perspective of improving knowledge retrieval.",
"We provide benchmark BART numbers (Table 1) for the generation task.",
"Prabhumoye et al. (2020) provide a schema containing five modules which can be changed to control the generation process.",
"While Zhao et al. (2020a) modify the external input and the output module, we focus on the external input and the generator module of the pre-trained language model.",
"This paper proposes two novel improvements for document grounded generation and provides a stronger baseline.",
"This paper demonstrates how similar modeling techniques could be used for two previously separately modeled tasks.",
"Our proposed models outperform the previous techniques and the new stronger baseline on automated metrics and human evaluation for the three datasets discussed in the paper.",
"We present a comprehensive manual inspection which reveal certain data artifacts and provides us with insight on how to model these tasks in future.",
"Particularly, future work can focus on designing better evaluation metrics which don't penalize linguistic variations in generation.",
"Better models can also be constructed to focus on cases of partial hallucination or incorrect responses.",
"The intended use of the models proposed is to aid the NLG systems in generating content-rich text.",
"Note that this does not imply that the models generate factually correct text.",
"The generation entirely depends on the information in the document provided.",
"If the document itself is factually incorrect then the generation would be grounded in false content and hence generate inaccurate text.",
"We hope that this technology is used for socially positive applications like building trust of users in dialogue systems like Alexa, Siri and Google Home by providing users with credible information.",
"This work has specifically focused on two datasets of dialogue response generation with the aim that this research not only helps in generating responses which contain useful information but also increase credibility of responses by disclosing the source of information.",
"If dialogue systems base their responses on certain sources of information then they can potentially disclose the source of information to the user.",
"The user then has the agency to make informed decision about trusting the system responses or not.",
"Additional generations are shown in Appendix (B).",
"Table 8 and 9 in Appendix B show the potential misuses of models trained on this task.",
"For both the experiments, a few news articles were hand selected and relevant context was selected from a chosen Wikipedia article.",
"In case of Table 9, the context was curated by hand.",
"Interestingly, the tables also shows the sensitivity of the trained model to the document information.",
"It consists of the same context but different documents were provided as inputs to the model.",
"The generated outputs are different for each document.",
"This work was supported in part by ONR Grant N000141812861 and NSF IIS1763562.",
"We are grateful to Semih Yavuz and Caiming Xiong for valuable discussions at earlier stages of this work.",
"We would like to thank Srinath Reddy Meadusani for his technical support throughout the project."
] | [
"abstain",
"method",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"method",
"method",
"result",
"objective",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"method",
"other",
"method",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Prior studies have found that women self-promote less than men due to gender stereotypes.",
"In this study we built a BERT-based NLP model to predict whether a Congressional tweet shows self-promotion or not and then used this model to examine whether a gender gap in self-promotion exists among Congressional tweets.",
"After analyzing 2 million Congressional tweets from July 2017 to March 2021, controlling for a number of factors that include political party, chamber, age, number of terms in Congress, number of daily tweets, and number of followers, we found that women in Congress actually perform more self-promotion on Twitter, indicating a reversal of traditional gender norms where women self-promote less than men.",
"Self-promotion is the act of presenting oneself as competent (Jones and Pittman, 1982).",
"It is an important impression management technique in professional communication.",
"Prior studies have found that self-promotion, when combined with other impression management techniques such as ingratiation for likeability, resulted in better interview evaluations (Proost et al., 2010).",
"However, self-promotion was also found to be a risk factor for womenthose who self-promoted may have encountered backlash for violating gender stereotypes (Rudman, 1998).",
"This risk was more pronounced in traditionally male-dominated professions such as politicians.",
"Women politicians were faced with the dilemma that, while their job required them to self-promote, doing so may have risked losing likeability and hurt election chances (Okimoto and Brescoll, 2010).",
"The popularization of social media use in recent years might provide an opportunity for women to escape this dilemma.",
"According to the equalization theory, social media has changed the traditional power structures between politicians and the mass media; as a result, marginalized groups such as women may gain more control in impression management strategies by directly interacting with constituents on social media platforms like Twitter (Seidman, 2013; Vergeer, 2015; Jungherr, 2016; Fountaine, 2017).",
"Thus, politicians' self-promotion behavior on Twitter is worth investigating further.",
"However, there is scant research on content analysis of politicians' self-promotion on Twitter, although prior studies such as (Golbeck et al., 2010) and (Hemphill et al., 2013) have analyzed the topics of Congressional tweets.",
"In this research, we model Congresspeople's self-promotion on Twitter as an NLP problem.",
"We first manually annotated a corpus of 4,000 tweets as self-promoting or not, and then built a prediction model to identify self-promoting tweets.",
"This model was then used to analyze self-promotion tweets by Congress members from July 2017 to March 2021.",
"We seek answers to the following research questions: (1) To what extent can NLP models identify self-promotion tweets from Congress-people?",
"(2) Who performed more self-promotion on Twitter, men or women?",
"In communication theories, self-promotion is considered an important tactic for self-presentation (Goffman, 1959; Giacalone and Rosenfeld, 1986).",
"The self-presentation theory proposed by Jones and Pittman (1982) provided a taxonomy of self-presentation tactics, which defined five strategies with different goals: (1) self-promotion for presenting oneself as competent, (2) exemplification for moral worthiness, (3) ingratiation for likability, (4) intimidation, and (5) supplication for requesting help.",
"In this study we adopted Jones and Pittman's taxonomy, and defined self-promotion as a self-presentation tactic aiming to present oneself as competent.",
"The phenomenon of a gender gap in self-evaluation and self-promotion has been well documented in social science research.",
"Exley and Kessler (2019) found that, between equally high-performing men and women, women would self-evaluate more poorly despite evaluating others similarly regardless of gender.",
"Such a gender gap has been observed in traditionally male-dominated professions.",
"For example, many businesswomen were uncomfortable using impression management behaviors (Singh et al., 2002); women MBA graduates were less likely to utilize free-form data fields to promote themselves in their LinkedIn profiles (Altenburger et al., 2017); and women researchers made fewer self-citations than men (King et al., 2017).",
"Women politicians are of particular interest for the gender gap in self-promotion behavior in that their jobs require self promotion, especially during elections and re-elections.",
"Hence, female politicians often face this double bind of likeability vs. competence (Schneider et al., 2010).",
"Prior studies have found that self-presentation is a major motivation for social media use (Seidman, 2013).",
"For politicians around the world, Twitter has become a popular social media platform.",
"For example, Jackson and Lilleker (2011) characterized Twitter as a tool for impression management among members of the UK Parliament, with self-promotion being the most common among their identified purposes.",
"During the 2014 elections in Belgium and Spain, Coesemans and de Cock (2017) found that Twitter was not only used for professional political communication, but also for personal branding.",
"Interestingly, recent studies on female politicians' Twitter behavior found patterns that deviated from traditional gender norms.",
"For instance, female House candidates both tweeted more and possessed higher follower counts than their male counterparts in the 2012 election (Evans et al., 2014).",
"It appears that female politicians actively utilize Twitter, perhaps as a way to overcome other systemic obstacles.",
"They also campaigned with more neg-ative and attack-style tweets than men, which could potentially detract from their image in voters' eyes (Evans and Clark, 2016).",
"However, recent evidence appears to suggest that being seen as am-House Senate D R D R total Female 105 39 (144) 19 10 (29) 173 Male 167 266 (433) 37 55 (92) 525 (577) (121) 698 Table 1: Distribution of Congress members of class 115, 116, and 117 across chamber, party, and gender.",
"bitious might no longer adversely affect female candidates (Saha and Weeks, 2020).",
"Therefore, it is worthwhile to re-visit the gender gap in self-promotion among politicians on Twitter.",
"A data set containing Congress members' tweets from July 1 of 2017 to March 31 of 2021, a total of 45 months' worth of data, was collected from the publicly available repository of Alex Litel's Tweets of Congress project, 1 which includes daily tweets from members of the 115th 117th Congresses, including both Senate and House.",
"Besides the tweets, this data set also includes metadata for each Congressperson, such as chamber, party, and a bio ID. 2 Using the bio ID, we were able to link each Congressperson to his/her pro-file compiled by the @unitedstates project, 3 which includes demographic information such as gender and birthday for members of the US Congress since 1789.",
"After the data linkage, we obtained about 2 million tweets in totalretweets were excluded from 698 Congress members.",
"Table 1 provides a summary of the gender, chamber, and party of the Congress members.",
"Figure 1 shows the median of the number of tweets posted by members of Congress per month from July 2017 to March 2021.",
"Women consistently posted more tweets than men, in accordance with the finding in (Evans et al., 2014).",
"The overall trend for both genders is also consistent with major events that occurred during this period of time, con-firming the reliability of the data set; for example: (1) less tweets in August due to Congress recessing 1 https://github.com/alexlitel/congresstweets 2 https://github.com/alexlitel/congresstweets-automator/ blob/master/data/historical-users-filtered.json 3 https://github.com/unitedstates/congress-legislators Common types of self-promotion Examples (1) Sharing information about or soliciting participation in events featuring self I'm speaking with reporters live at the U.S. Capitol as the House continues its work to put #FamiliesFirst in America's response to the coronavirus pandemic. https://t.co/hDgusBJB1L (2) Talking about own work progress and accomplishments, such as introducing or passing bills, demonstrating authority, or acting in leadership positions As co-chair of the Medicare for All Congressional Caucus, I fight everyday for every American to access quality healthcare. We need Medicare for All. A patient identifier is a common sense way to reduce medical errors and save lives. Proud the House adopted my amendment this week. https://t.co/6jBfvgUIJc (3) Mentioning received recognitions, such as endorsements and awards I was honored to join @1SI Chamber today and receive the Spirit of Enterprise Award from the @USChamber https://t.co/FJ199jFm9G I am honored to have received the endorsement from the BRAFLCIO . Thank you to all the workers, retirees, and their families who truly are the voice and backbone of Florida's labor movement. #aflcio #union #local #fl20 https://t.co/S1hvT1pE6h Table 2: Common types of self-promotion tweets used by members of Congress.",
"for the month; (2) less tweets during year-end holidays; (3) a significant decrease right after the 2018 mid-term and the 2020 election; and (4) a significant increase in March 2020 due to the Covid-19 pandemic.",
"Following Jones and Pittman (1982)'s taxonomy, in this study we define self-promotion as a self-presentation tactic aiming to present oneself as competent.",
"To operationalize the defined concept of self-promotion, two annotators conducted iterative rounds of coding to identify self-promotion content in the tweets.",
"In each round, one hundred tweets were randomly selected and independently coded as either self-promotion or not by the annotators.",
"The disagreements were brought to group discussion.",
"After two rounds of discussion, a sample of 300 tweets was used to conduct an inter-coder agreement test.",
"The result shows an agreement level at 0.80, measured by Cohen's Kappa.",
"The two annotators then each annotated more tweets, resulting in a total of 4,003 annotated tweets, including 914 self-promotion and 3,089 non-self-promotion tweets.",
"We also summarized the three most common types of self-promotion tweets observed during annotation: (1) advertising events featuring self, (2) talking about own work progress or accomplishments, and (3) announcing received recognitions such as awards and endorsements.",
"See Table 2 for tweet examples.",
"In order to ensure that the training dataset contains a sufficient amount of self-promotion tweets, we over-sampled tweets that contain the word I (referred to as I-tweets ), based on the critical role of self-referencing in self-promotion (Coesemans and de Cock, 2017).",
"We found that over 30% of I-tweets contain self-promotion, while only about 10% of non-I-tweets contain self-promotion.",
"Therefore, although the original ratio of I-tweets vs. non-I-tweets is 0.37 to 1 in the data set, we sampled I-tweets and non-I-tweets by a ratio of 1.7 to 1, resulting in about 2,500 I-tweets and about 1,500 non-I-tweets in the annotated corpus.",
"In addition, to ensure that we have a representative sample, the 4,003 tweets were sampled with each member of Congress contributing at most 10 tweets to the sample.",
"We evaluated two machine learning models on our annotated corpus via 5-fold cross-validation.",
"One is LinearSVM, and the other one is BERT (Devlin et al., 2019).",
"The BERT model 4 achieved a score of macro-F1 at 0.890, and accuracy at 0.923 (see result details in Table 3).",
"In contrast, LinearSVM 5 4 Parameter settings for the BERT model: 3 epochs, learning rate=1e-5, max sequence length=128, cased BERT-base pretrained model 5 Parameter settings for the LinearSVM: scikit-learn Lin-earSVC, C=1 and tf-idf vectorization with parameters: 1/2/3 achieved a much lower score of macro-F1 at 0.652 and accuracy at 0.868.",
"The BERT model was cho-sen to be applied to identify all self-promotional tweets in the data set.",
"To help us understand the linguistic cues used in self-promoting tweets, we conducted an analysis with LIME a machine learning interpretation tool (Ribeiro et al., 2016).",
"We first sampled 5000 tweets, then for each tweet, we ran LIME (paired with the above fine-tuned BERT model) to find the most salient words (specifically, top 7 words with the highest weights).",
"This resulted in the following list of content words related to expressing self-promotion: bill, legislation, Tune, introduced, Act, proud, honored, bipartisan, joining, live , etc.",
"When adding the context in which these words occur, we found such phrases as:",
"1. I am proud to introduce / cosponsor / support / vote for a [bipartisan] bill / legislation",
"2. Be sure to tune in / I'm live now / I'm hosting a virtual town hall",
"3. I'm honored to have received / earned / be recognized by We also examined a sample of prediction errors to identify areas for future improvement.",
"The prediction model missed some self-promotion tweets that are implicit without direct attribution, such as the Case 1 in Table",
"4. An implicit self-promotion tweet may also attribute the credit to a group instead of oneself, or self-promote through someone else's words, such as a direct quote from a voter.",
"The prediction model also mistook some non-self-promotion tweets as self-promotion, due to linguistic similarity.",
"For example, in the Case 2 in Table 4, a Congress member attended a social event to demonstrate their moral worthiness rather than their competency.",
"The error analysis shows that more clarifying training examples may further improve the prediction model.",
"Case",
"1. I am always willing to stand up for what I believe in, but I will always do it as respectfully as possible and with a goal toward building the greatest power. This strategy is working and US-Progressives have more power than ever before. (Note: self-promotion, false negative prediction) Case",
"2. Yesterday, on the steps of the State Capitol in Sacramento, I joined hundreds in rallying against the state's latest water grab in the San Joaquin Valley. (Note: not self-promotion, false positive prediction) Table 4: Examples of the BERT model prediction error.",
"Applying the above trained BERT model to the 2 million tweets posted by the Congress members, we found that 16.7% of the tweets contained self-promotion.",
"To examine gender difference in self-promotion, we adopted a generalized linear mixed-effects regression framework, in which (1) the fixed-effects factors are gender ( F/M ), political party ( D/R ), chamber ( house/senate ), age, number of terms served in Congress, number of daily tweets (representing tweet frequency), and number of followers (preprocessed with log transformation due to its highly skewed distribution), (2) the random-effects factors are the author and the date of a tweet; and (3) the dependent variable is whether a tweet contains self-promotion, of which the value comes from the BERT prediction result.",
"We fed the 2 million observations of tweets into the mixed-effects model, using the glmer() function of the R package lme4 (Bates et al., 2014) see Appendix A for the detailed regression formula and Appendix C for the distribution of the four numerical factors.",
"Table 5 shows a significant gender difference when controlling for other factors: women in Congress are more likely to self-promote in their tweets than their men colleagues.",
"We are also interested in further examining whether this gender difference has been consistent over the time.",
"To answer this question, for each month from July 2017 to March 2021, we fit the monthly data to the mixed-effects model, and then from each monthly model we calculated the estimated marginal means or expected means.Specifically, we used the ggemmeans() Coef Std Err P-value gender [F] 0 .",
"function 6 in the R package ggeffects to do the calculation (Ludecke, 2018).",
"As shown in Fig. 2, we can see that women consistently exhibited more self-promotion than men.",
"In addition to the gender effect, Table 5 also shows other significant factors for self-promotion: (1) Senators are more likely to send self-promotion tweets than House Representatives; (2) young people self-promote more than old people; and (3) Congress members with fewer followers or those who tweet less frequently are more likely to do self-promotion.",
"While more research is needed for causal interpretations, these findings seem to be consistent with common sense knowledge.",
"As mentioned in Table 2 (common types of self-promotion tweets), most self-promotion tweets were advertising events or touting accomplishments, endorsements, and 6 For categorical factors such as party, ggemmeans() averages over their categories; for numerical factors such as age, their mean values (e.g., age=60) are used.",
"awards.",
"Since Senators represent the entire states, while members of the House represent individual districts, Senators are in general more politically powerful, and might be involved in more activities that they can use for self-promotion.",
"It is probably not surprising that younger members do more self-promotion on Twitter as they are more social media savvy.",
"The negative correlation between self-promotion and the tweet frequency (daily tweets) or number of followers indicates that for the members who are less active on Twitter or have fewer followers, self-promotion accounts for a larger proportion of their tweets, suggesting that their Twitter use is somewhat more focused on self-promotion.",
"Contribution.",
"We built an annotated corpus of self-promotion tweets posted by Congress members, and trained a BERT-based prediction model with 0.89 macro-F1 score.",
"To the best of our knowledge, this is the first NLP model for predicting self-promotion in political tweets.",
"Applying this model to 2 million Congressional tweets from July 2017 to March 2021, we found that 16.7% of Congressional tweets contained self-promotion.",
"After controlling for a number of factors we found women in Congress perform significantly more self-promotion on Twitter than their male colleagues.",
"This indicates a reversal of traditional gender norms where women self-promote less than men.",
"Limitations.",
"Although the data set we used is large and spans almost 4 years, more data are needed to evaluate whether the self-promotion prediction model is generalizable to politicians outside of the US Congress, such as those in other government branches (e.g. executive and judicial) and levels (e.g. states and counties), and other countries.",
"Based on our manual annotations, we would speculate that the model should be generalizable to some extent in that self-promotion content shares some common terms such as describing leadership roles and sharing news on awards and endorsement.",
"However, some self-promotional content may be domain-specific, e.g. accomplishment on introducing and passing bills is only applicable to legislators.",
"We thank the reviewers and Albert Wang for",
"for very helpful comments, help with annotation.",
"Libby Hemphill, Jahna Otterbacher, and Matthew Shapiro.",
"What's congress doing on twitter?",
"Nigel Jackson and Darren G. Lilleker.",
"2011.",
"Mi-croblogging, Constituency Service and Impression Management: UK MPs and the Use of Twitter.",
"The Journal of Legislative Studies , 17:86105.",
"Edward E Jones and Thane S Pittman.",
"1982.",
"Toward a general theory of strategic self-presentation , volume 1, chapter 9.",
"Lawrence Erlbaum Associates.",
"Andreas Jungherr.",
"2016.",
"Twitter use in election campaigns: A systematic literature review.",
"Journal of information technology & politics , 13(1):7291.",
"Molly M King, Carl T Bergstrom, Shelley J Correll, Jennifer Jacquet, and Jevin D West.",
"2017.",
"Men set their own cites high: Gender and self-citation across fields and over time.",
"Socius",
", 3. Daniel Ludecke.",
"2018.",
"ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.",
"Journal of Open Source Software , 3(26):772.",
"Tyler G Okimoto and Victoria L Brescoll.",
"2010.",
"The price of power: Power seeking and backlash against female politicians.",
"Personality and Social Psychology Bulletin , 36(7):923936.",
"K. Proost, B. Schreurs, K. D. Witte, and Eva Der-ous.",
"2010.",
"Ingratiation and self-promotion in the selection interview: The effects of using single tactics or a combination of tactics on interviewer judgments.",
"Journal of Applied Social Psychology , 40:21552169.",
"Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin.",
"2016.",
"Why Should I Trust You?: Explaining the Predictions of Any Classifier.",
"In KDD'2016 , page 11351144.",
"Laurie A Rudman.",
"1998.",
"Self-promotion as a risk factor for women: The costs and benefits of counter-stereotypical impression management.",
"Journal of Personality and Social Psychology , 74(3):629645.",
"Sparsha Saha and Ana Catalano Weeks.",
"2020.",
"Ambitious women: Gender and voter perceptions of candidate ambition.",
"Political Behavior , pages 127.",
"Andrea Kupfer Schneider, Catherine H Tinsley, Sandra Cheldelin, and Emily T Amanatullah.",
"2010.",
"Likeability v. competence: The impossible choice faced by female politicians, attenuated by lawyers.",
"Duke Journal of Gender Law & Policy , 17(2):363384.",
"Gwendolyn Seidman.",
"2013.",
"Self-presentation and belonging on facebook: How personality influences social media use and motivations.",
"Personality and individual differences , 54(3):402407.",
"Val Singh, Savita Kumra, and Susan Vinnicombe.",
"2002.",
"Gender and impression management: Playing the promotion game.",
"Journal of Business Ethics , 37(1):7789."
] | [
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Conversational Question Simplification (CQS) aims to simplify self-contained questions into conversational ones by incorporating some conversational characteristics, e.g., anaphora and ellipsis.",
"Existing maximum likelihood estimation based methods often get trapped in easily learned tokens as all tokens are treated equally during training.",
"In this work, we introduce a Reinforcement Iterative Sequence Editing (RISE) framework that optimizes the minimum Levenshtein distance through explicit editing actions.",
"RISE is able to pay attention to tokens that are related to conversational characteristics.",
"To train RISE, we devise an Iterative Reinforce Training (IRT) algorithm with a Dynamic Programming based Sampling (DPS) process to improve exploration.",
"Experimental results on two benchmark datasets show that RISE significantly outperforms state-of-the-art methods and generalizes well on unseen data.",
"Conversational information seeking (CIS) (Zamani and Craswell, 2020; Ren et al., 2021b) has received extensive attention.",
"It introduces a new way to connect people to information through conversations (Qu et al., 2020; Gao et al., 2021; Ren et al., 2020).",
"One of the key features of CIS is mixed initiative behavior, where a system can improve user satisfaction by proactively asking clarification questions (Zhang et al., 2018; Aliannejadi et al., 2019; Xu et al., 2019), besides passively providing answers (Croft et al., 2010; Radlinski and Craswell, 2017; Lei et al., 2020).",
"Previous studies on asking clarification questions can be grouped into two categories: conversational question generation (Duan et al., 2017) and conversational question ranking (Aliannejadi et al., 2019).",
"The former directly generates conversational questions based on the dialogue context.",
"However, the generated questions may be irrelevant and meaningless (Rosset et al., 2020).",
"A lack of explicit semantic guidance makes it difficult to produce each question token from scratch while preserving relevancy and usefulness at the same time (Wang et al., 2018; Chai and Wan, 2020).",
"Instead, the latter proposes to retrieve questions from a collection for the given dialogue context, which can usually guarantee that the questions are relevant and useful (Shen et al., 2018; Rosset et al., 2020).",
"However, question ranking methods do not lead to a natural communication between human and machine (Pulman, 1995), as they neglect important characteristics in conversations, e.g., anaphora and ellipsis.",
"As shown in Fig. 1, the self-contained question (SQ4) lacks these characteristics, which makes it look unnatural.",
"Conversational Question Simplification (CQS).",
"Given a dialogue context and self-contained question as input, CQS aims to transform the self-contained question into a conversational one by simulating conversational characteristics, such as anaphora and ellipsis.",
"For example, in Fig. 1, four simplification operations are applied to obtain the conversational question (CQ4), which is context-dependent and superior to its origin one (SQ4) in terms of naturalness and conveying.",
"The reverse process, i.e., Conversational Question Rewriting (CQR) (Elgo-hary et al., 2019; Voskarides et al., 2020) which rewrites CQ4 into SQ4, has been widely explored in the literature (Vakulenko et al., 2020; Yu et al., 2020).",
"Although the proposed methods for CQR can be easily adopted for CQS, they do not always generate satisfactory results as they are all trained to optimize a maximum likelihood estimation (MLE) objective, which gives equal attention to generate each question token.",
"Therefore, they often get stuck in easily learned tokens, i.e., tokens appearing in input, ignoring conversational tokens, e.g., him, which is a small but important portion of output.",
"To address the above issue, we propose a new scheme for CQS, namely minimum Levenshtein distance (MLD).",
"It minimizes the differences between input and output, forcing the model to pay attention to contributing tokens that are related to conversational tokens, e.g., Ira Hay and him in Fig.",
"1. Therefore, MLD is expected to outperform MLE for CQS.",
"However, MLD cannot be minimized by direct optimization due to the discrete nature, i.e., minimizing the number of discrete edits.",
"We present an alternative solution, a R einforcement I terative S equence E diting (RISE) framework for the optimization of MLD.",
"We formulate RISE as a Hierarchical Combinatorial Markov Decision Process (HCMDP) consisting of an editing Markov Decision Process (MDP) to predict multiple edits for all tokens in the self-contained question, e.g., K eep (K)' to keep a token, and a phrasing MDP to predict a phrase if the edit is I nsert (I)' or S ubstitute (S)'.",
"We only have the self-contained and conversational question pairs in the dataset while the demonstrations of the editing iterations are lacked.",
"Thus, we cannot train each editing iteration of RISE with teacher forcing.",
"To this end, we devise an Iterative Reinforce Training (IRT) algorithm that allows RISE to do some exploration itself.",
"The exploration can be rewarded according to its Levenshtein distance (LD) with the demonstrated conversational question.",
"Traditional exploration methods like (cid:15) -sampling (Sutton and Barto, 1998) neglect the interdependency between edits for all tokens, resulting in poor exploration.",
"Thus, we further introduce a Dynamic Programming based Sampling (DPS) process that adopts a Dynamic Programming (DP) algorithm to track and model the interdependency in IRT.",
"Experiments on the CANARD (Elgohary et al., 2019) and CAsT (Dalton et al., 2019) datasets show that RISE significantly outperforms state-of-the-art methods and generalizes well to unseen data.",
"Given a dialogue context C representing the previous conversation utterances and the self-contained clarification question candidate x = { x 1 , . . . , x | x | } to be asked next (e.g., from a conversational question ranking model), the goal of Conversational Question Simplification (CQS) is to reformulate question x to a conversational question y = { y 1 , . . . , y | y | } by simulating conversational characteristics, e.g., anaphora and ellipsis.",
"A target conversational question y = { y 1 , . . . , y | y | } is provided during the training phase.",
"CQSA commonly adopted paradigm for tasks similar to CQS, e.g., CQR, is to model the task as a conditional sequence generation process parameterized by , which is usually optimized by MLE:",
"L = log p ( y | x, C ) = | y | (cid:88) t =1 log p ( y t | y <t , x, C ) , (1)",
"where y is the target question and y <t denotes the prefix y 1 , y 2 , . . . , y t 1 .",
"As we can see, MLE gives equal weight to each token and falls in easily learned tokens, the overwhelming duplicate tokens between x and y , while underestimating subtle differences of tokens related to conversational characteristics.",
"Inspired by Arjovsky et al. (2017), to minimize the distance between two distributions, we propose",
"to minimize the LD between the target question y and the model output y so as to leverage the high overlap between x and y and focus on subtle different tokens:",
"Unfortunately, it is impossible to directly optimize Eq.",
"2 because the LD between y and y is the minimum number of single-token edits (insertions, deletions or substitutions) required to change y into y , which is discrete and non-differentiable.",
"To optimize MLD in Eq.",
"2, we devise the Reinforcement Iterative Sequence Editing (RISE) framework, which reformulates the optimization of MLD as a Hierarchical Combinatorial Markov Decision Process (HCMDP).",
"Next, we first describe our HCMDP formulation of RISE.",
"We then detail the modeling of each ingredient in RISE.",
"Finally, we present the training process of RISE.",
"RISE produces its output y by iteratively editing x with four types of edit, i.e., K' to keep a token, D elete (D)' to delete a token, I' to insert a phrase (a sequence of tokens) after a token, and S' to substitute a phrase by a new one.",
"If a token is predicted as I' or S', we need to further predict a corresponding phrase.",
"Note that we only predict one phrase for successive S' edits.",
"We formulate RISE as a Hierarchical Combinatorial Markov Decision Process (HCMDP) consisting of (1) an editing MDP to predict multiple edits for all tokens, and (2) a phrasing MDP to predict a phrase if the edit is I' or S'.",
"The editing MDP can be formulated as a tuple (cid:104)S e , A e , T e , R , e (cid:105) .",
"Here, s e t S e denotes the question at t -th iteration y t together with the context C , i.e., s et = ( y t , C ) .",
"Note that s e 0 = ( x, C ) .",
"a et = [ a et, 1 , a et, 2 , . . . , a et, | y t | ] A e is a combinatorial action consisting of several interdependent edits.",
"The number of edits corresponds to the length of y t .",
"For example, in Fig. 2, a et = [K', K', K', K', S', S', K', K'].",
"In our case, the transition function T e is deterministic, which means that the next state s et +1 is obtained by applying the predicted actions from both the editing MDP and phrasing MDP to the current state s et .",
"r t R is the reward function, which estimates the joint effect of taking the predicted actions from both the editing and phrasing MDPs.",
"e is the editing policy network.",
"The phrasing MDP can be formulated as a tuple (cid:104)S p , A p , T p , R , p (cid:105) .",
"Here, s pt S p consists of the current question y t , the predicted action from the editing MDP a et , and the context C , i.e., s pt = ( y t , a et , C ) .",
"a pt = [ a pt, 1 , a pt, 2 , . . . ] A p is also a combinatorial action, where a pt,i denotes a phrase from a predefined vocabulary and i corresponds to the index of the I' or S' edits, e.g., in Fig. 2, a pt, 1 = him' is the predicted phrase for the first S' edit.",
"The length of the action sequence corresponds to the number of I' or S' edits.",
"The transition function T p returns the next state s pt +1 by applying the predicted actions from the phrasing MDP to the current state s pt .",
"r t R is the shared reward function.",
"p is the phrasing policy network.",
"RISE tries to maximize the expected reward: J ( ) = E a et e ,a p t p [ r t ] , (3) where is the model parameter which is optimized with the policy gradient: J ( ) = E a et e ,a pt p [ r t ( log e ( a et | s et ) + log p ( a pt | s pt ))] , (4) Next, we will show how to model e ( a et | s et ) , p ( a pt | s pt ) , and r t .",
"We implement the editing and phrasing policy networks ( e and p ) based on BERT2BERT (Rothe et al., 2020) as shown in Fig.",
"2. The editing policy network is implemented by the encoder to predict combinatorial edits, and the phrasing policy network is implemented by the decoder to predict phrases.",
"We unfold all tokens of the utterances in the context into a sequence C = ( w 1 , . . . , w c ) , where w i denotes a token and we add [SEP] to separate different utterances.",
"Then the context and input question in t -th iteration are concatenated with [SEP] as the separator.",
"Finally, we feed them into the encoder of BERT2BERT to obtain hidden representations for tokens in question H t = ( h t 1 , . . . , h t | y t | ) and apply a linear layer with parameter W e to predict a et : e ( a et | s et = ( y t , C )) = softmax( W e H t ) .",
"(5) ... ... him Editing policy Phrasing policy C r o ss A tt en t i on a pt ,1 Was anyone opposed to Ira Hayes revealing ...",
"We first extract the spans corresponding to the I' or S' edits from the question.",
"If the edit is I', the question span span ti consists of tokens before and after this insertion, i.e., span ti = [ y tj , y tj +1 ] ; if the edit is S', the question span span ti consists of successive tokens corresponding to the S' edit, i.e., span ti = [ y tj , . . . , y tk ] , where a et,j : k = S' and a et,k +1 (cid:54) = S'.",
"We only predict once for successive S' edits, e.g., in Fig. 2, the phrase him' is predicted to substitute question span [Ira, Hayes].",
"For the i -th I' or S' edit with a question span span ti , we concatenate the span and [CLS] token as input tokens, and feed them into the decoder of BERT2BERT to obtain a hidden representation of [CLS] token s ti .",
"We obtain S t by concatenating each s ti and predict the phrases for all S' and I' edits by a linear layer with parameter W p : p ( a pt | s pt ) = softmax( W p S t ) .",
"We devise the reward r t to estimate the effect of taking the joint action ( a et , a pt ) by encouraging actions that can result in low LD values between y t +1 and y , i.e., minimizing Eq.",
"2. Besides, we discourage those actions to achieve same y t +1 with extra non K' edits: r t = 1 1 + LD ( y t +1 , y ) (cid:32) l (cid:88) t ( a et (cid:54) = K' ) + 1 (cid:33) , l = LD ( y t , y ) LD ( y t +1 , y ) , (7) where 1 1+ LD ( y t +1 ,y ) will reward actions that result in low LD values between y t +1 and y and ( l (cid:80) t ( a et (cid:54) = K' )) will punish those actions with unnecessary non K' edits.",
"To train RISE, we need training samples in the form of a tuple ( s et , a et , s pt , a pt , r t ) .",
"However, we only have ( y 0 = x, y ) in our dataset.",
"Traditional exploration methods like (cid:15) -greedy sampling sample edits for all tokens independently, ignoring the interdependency between them.",
"Instead, we devise an Iterative Reinforce Training (IRT) algorithm to sample an edit for each token by considering its future expectation, i.e., sampling a et,i based on expectation of a et, : i 1 from i = | y t | to 1 .",
"We maintain a matrix M t for this expectation based on both y t and y , which is computed by a Dynamic Programming based Sampling (DPS) process due to the exponential number of edit combinations of a et, : i .",
"The details of IRT are provided in Alg.",
"1; it contains a DPS process that consists of two parts: computing the matrix M t (line 48) and sampling actions ( a et , a pt ) (line 10) based on M t .",
"Given ( y t , y ) with length m and n , we maintain a matrix M t R ( m +1) ( n +1) (including [SEP]', see the upper right part in Fig. 3) where each element M t i,j tracks the expectation of a e t, : i to convert y t : i to y : j :",
"M ti,j = E p i,j ( a et,i ) [ E p ( a et, : i 1 ) y t : i >y : j ( a et, : i )] = E p i,j ( a et,i ) e ( a et,i | y t , C ) M t i 1 ,j 1 , if a e t,i = K' M ti 1 ,j , if a et,i = D' M ti,j 1 , if a et,i = I' M ti 1 ,j 1 , if a et,i = S' , (8)",
"where a et, : i is the combinational edits for tokens y t : i and e ( a et,i | y t , C ) is calculated by Eq.",
"5 (see the upper left part in Fig. 3).",
"M t 0 , 0 is initialized to 1 .",
"We will first introduce p i,j ( a et,i ) and then introduce y t : i >y : j ( a et, : i ) in Eq.",
"8.",
"Traditional sampling methods sample each edit a et,i independently, based on model likelihood e ( a et,i | y t , C ) .",
"Instead, we sample each edit with probability p i,j ( a et,i ) based on edits expectation M t , which is modeled as: p i,j ( a et,i ) = 1 Z ti,j ( a et,i | y t , C ) M ti 1 ,j 1 , if a et,i = K' M ti 1 ,j , if a et,i = D' M ti,j 1 , if a et,i = I' M ti 1 ,j 1 , if a et,i = S' , (9) where Z ti,j is the normalization term.",
"We give an example on computing M t 1 , 2 in the bottom part of Fig.",
"3. For edit I' in M t 1 , 2 , its probability is 1, and its value is e ( a et,i = I' | y t , C ) M t 1 , 1 = 0 .",
"008 .",
"For the other edits, the probability is",
"0. Therefore, M t 1 , 2 = 0 .",
"008 .",
"y t : i >y : j ( a et, : i ) is the probability of conducting edits a et, : i to convert y t : i to y : j : y t : i >y : j ( a et, : i ) = e ( a et,i | y t , C ) y t : i 1 >y : j 1 ( a et, : i 1 ) , if a et,i 1 = K' y t : i 1 >y : j ( a et, : i 1 ) , if a et,i 1 = D' y t : i >y : j 1 ( a et, : i ) , if a et,i = I' y t : i 1 >y : j 1 ( a et, : i 1 ) , if a et,i 1 = S' , (10) To convert y t : i to y : j , we need to make sure that y ti can convert to y j and that y t : i 1 can convert to y : j 1 , which can be calculated recursively.",
"Note that we only allow S' and D' for y ti when y ti (cid:54) = y j and K' and I' for y ti when y ti = y j .",
"And M ti 1 ,j 1 = E p ( a et, : i 1 ) y t : i 1 >y : j 1 ( a et, : i 1 ) .",
"We sample ( a et , a pt ) based on matrix M t by backtracking from i = m, j = n .",
"For example, as shown in the upper right in Fig. 3, we backtrack along the blue arrows.",
"In this truncated sample, we start from M t 7 , 6 , sample an edit K' to keep reveal-ing' based on p 7 , 6 ( a et, 7 ) in Eq.",
"9, and move to M t 6 , 5 .",
"Then, we sample S' to substitute Ira Hayes' to him' and move to M t 4 , 4 .",
"Finally, we sample K' Algorithm 1: Training Process of RISE Input: The origin data D = { ( x, y ) } , the number of samples L ; Output: The model parameters ; 1 while not coverage do 2 Sample ( y t , y ) from D ; 3 M t 0 , 0 = 1 ; 4 for i in 0,.",
"in [ M t 4 , 4 , M t 3 , 3 , M t 2 , 2 M t 1 , 1 , M t 0 , 0 ] to keep [to', op-posed', anyone', Was', [SEP]'].",
"Therefore, we can obtain a et = [K, K, K, K, K, S, S, K], a p t = [him'].",
"Note that we obtain a pt by merging all corresponding tokens y j as the phrase for each I' edit and successive S' edits and we only substitute once.",
"The backtracking rule can be formulated as: M ti,j M ti 1 ,j 1 , if a et,i [ K' , S' ] M ti 1 ,j , if a et,i = D' M ti,j 1 , if a et,i = I' .",
"During inference, RISE iteratively edits x until it predicts K' edits for all tokens or it achieves the",
"maximum iteration limit.",
"For example, for editing iteration t in Figure 2, it predicts S' for Ira' and Hayes' to substitute it to him' and K' for other tokens, which results in Was anyone opposed to him revealing . . . ' as output.",
"The output in iteration t is the input of iteration t + 1 .",
"The actual editing iteration times vary with different samples.",
"As with previous studies (Elgohary et al., 2019; Yu et al., 2020; Vakulenko et al., 2020; Lin et al., 2020a), we conduct experiments on the CANARD 1 (Elgohary et al., 2019) dataset, which is a large open-domain dataset for conversational question answering (with over 30k training sam-ples).",
"Each sample in the CANARD dataset includes a conversational context (historical questions and answers), an self-contained question, and its corresponding conversational question under the context.",
"The questions always have clear answers, e.g., Did he win the lawsuit?' We follow the CANARD splits for training and evaluation.",
"In addition, we evaluate the model performance on the CAsT 2 dataset (Dalton et al., 2019), which is built for conversational search.",
"Different from CANARD, its context only contains questions without corresponding answers.",
"Besides, most questions in the CAsT dataset are exploring questions to explore relevant information, e.g., What about for great whites?' Since the CAsT dataset only contains 479 samples from different domains compared to CANARD, we use it for testing.",
"Following Su et al. (2019); Xu et al. (2020), we use BLEU-1, BLEU-2, BLEU-3, BLEU-4 (Pap-ineni et al., 2002), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015) for automatic evaluation.",
"BLEUn and ROUGE-L measure the word overlap between the generated and golden questions.",
"CIDEr measures the extent to which important information is missing.",
"Elgohary et al. (2019); Lin et al. (2020a); Xu et al. (2020) have shown that automatic evaluation has a high correlation with human judgement on this task, so we do not conduct human evaluation in this paper.",
"We compare with several recent state-of-the-art methods for this task or closely related tasks: Origin uses the original self-contained question",
"as output.",
"Rule (Yu et al., 2020) employs two simple rules to mimic two conversational characteristics: anaphora and ellipsis.",
"QGDiv (Sultan et al., 2020) uses RoBERTa (Liu et al., 2019) with beam search (Wiseman and Rush, 2016) for generation.",
"Trans++ (Vakulenko et al., 2020) predicts several word distributions, and combines them to obtain the final word distribution when generating each token.",
"QuerySim (Yu et al., 2020) adopts a GPT-2 (Radford et al., 2019) model to generate conversational question.",
"We also found some methods from related tasks.",
"But they do not work on this task for various reasons.",
"For example, due to the lack of labels needed for training, we cannot compare with the methods proposed by Rosset et al. (2020) and Xu et al. (2020).",
"Su et al. (2019) propose a model that can only copy tokens from input; it works well on the reverse task (i.e., CQR), but not on CQS.",
"We use BERT2BERT for the modeling of the editing and phrasing parts (Rothe et al., 2020), as other pretrained models like GPT-2 (Radford et al., 2019) cannot work for both.",
"The hidden size is 768 and phrase vocabulary is 3461 following (Malmi et al., 2019).",
"We use the BERT vocabulary (30,522 tokens) for all BERT-based or BERT2BERT-based models.",
"We use the Adam optimizer (learning rate 5e-5) (Kingma and Ba, 2015) to train all models.",
"In particular, we train all models for 20,000 warm-up steps, 5 epochs with pretrained model parameters frozen, and 20 epochs for all parameters.",
"For RISE, the maximum editing iteration times is set to",
"3. We use gradient clipping with a maximum gradient norm of 1.0.",
"We select the best models based on the performance on the validation set.",
"During inference, we use greedy decoding for all models.",
"We list the results of all methods on both CANARD and CAsT in Table",
"1. From the results, we have two main observations.",
"First, RISE significantly outperforms all base-Table 1: Overall performance (%) on CANARD and CAsT.",
"Bold face indicates the best results in terms of the corresponding metrics.",
"Significant improvements over the best baseline results are marked with (t-test, p < 0 . 01 ).",
"Note that we denote BLEUn as Bn and ROUGE-L as R-L.",
"lines on both datasets.",
"Specifically, RISE outperforms the strongest baseline QuerySim by 4% in terms of ROUGE-L.",
"The reason is that RISE enhanced by DPS has a better ability to emphasize conversational tokens, rather than treating all tokens equally.",
"Second, RISE is more robust, which generalizes better to unseen data of CAsT.",
"The results of the neural methods on CANARD are much better than those on CAsT.",
"But, RISE is more stable than the other neural models.",
"For example, RISE outperforms QuerySim by 0.6% in BLEU-4 on CANARD, while 1.3% on CAsT.",
"The reason is that RISE learns to cope with conversational tokens only, while other models need to generate each token from scratch.",
"To analyze where the improvements of RISE come from, we conduct an ablation study on the CANARD and CAsT datasets (see Table 2).",
"We consider two settings: -DPS.",
"Here, we replace DPS by (cid:15) -greedy sampling ( (cid:15) = 0 .",
"RISE.",
"The results show that both parts (DPS and MLD) are helpful to RISE as removing either of them leads to a decrease in performance.",
"Without MLD, the performance drops a lot in terms of all metrics, e.g., 3% and 7% in BLEU-4 on CANARD and CAsT, respectively.",
"This indicates that optimizing MLD is more effective than optimizing MLE.",
"Besides, MLD generalizes better on unseen CAsT as it drops slightly in all metrics, while with MLE, we see a drop of 10% in BLEU-1.",
"Without DPS, the results drop dramatically, which indicates that DPS can do better exploration than (cid:15) -greedy and is of vital importance for RISE.",
"For example, -DPS tends to sample more non K' edits (RISE vs -DPS: 10% vs 22% on CANARD), which is redundant and fragile.",
"The performance of -DPS is even worse than Origin in CAsT in BLEU-4.",
"This may be because CAsT is unseen.",
"To analyze the relation between the number of editing iterations of RISE and the editing difficulty, we plot a heatmap in Fig. 4, where the deeper color represents a larger number of editing iterations.",
"The x-axis denotes the number of tokens shown in input x but not shown in output y and the y-axis denotes the number of tokens shown in y but not in x .",
"As the number of different tokens between x and y increases, the number of editing iterations increases too.",
"For example, when the y-axis is 1, as the x-axis ranges from 1 to 10, the number of Table 2: Ablation study (%) on CANARD and CAsT.",
"editing iterations increases from 1.2 to 2.6 because more D' edits are needed.",
"We also found that when the x-axis is between 3 and 7 and the y-axis is between 1 and 4, only 12 editing iterations are needed.",
"Usually, this is because RISE only needs 1 or 2 successive S' edits for simulating anaphora.",
"The overall performance of RISE improves as the number of editing iterations increases.",
"RISE achieves 70.5% in BLEU-4 in the first iteration (even worse than QuerySim in Table 1) but 71.5% and 71.6% in the second and third iterations.",
"This shows that some samples are indeed more difficult to be directly edited into conversational ones, and thus need more editing iterations.",
"Even though it will not hurt the performance a lot, more editing iterations are not always helpful.",
"About 5% of the samples achieve worse BLEU-4 scores as the number of editing iterations increases.",
"For example, RISE edits where did humphrey lyt-telton go to school at?' into where did he go to school at?' in the first iteration, which is perfect.",
"But RISE continues to edit it into where did he go to school?' in the second iteration, which is undesirable.",
"This is because RISE fails to decide whether to stop or continue editing.",
"In Table 3 we present two examples of the output of RISE.",
"We present the context, the original self-contained question, the target conversational question, and the output of RISE in the n -th iteration, denoted as Context', Question', Target' and Rewrite#n', respectively.",
"We have two main observations.",
"First, it is helpful to edit iteratively.",
"As shown in Example 1, RISE first replaces Abu' as he' in the first iteration and then deletes bakr' in the second iteration, which simulates anaphora by editing twice.",
"In Example 2, RISE simulates el-Table 3: Examples generated by RISE on CANARD.",
"Here, Question' means the self-contained question, and Target' means the desired conversational question.",
"Rewrite#n' denotes the output of RISE in n-th iteration.",
"lipsis by deleting multiple words and achieves poor grammar after the first iteration but corrects this by deleting some of the leftover words.",
"RISE may have learned to check the grammar and remove redundant words.",
"Second, RISE can simulate more conversational characteristics than human, and sometimes it can achieve a better result, sometimes not.",
"As we can see, RISE results a better conversational question by additionally simulating anaphora for Abu Bakr' in Example",
"1. However, RISE leaves out necessary information in Example",
"2. Here, RISE tries to simulate conversational characteristics as much as possible, where the result may be uncontrollable.",
"Studies on asking conversational question can be divided into two categories: conversational question generation and conversational question ranking .",
"Conversational question generation aims to directly generate conversational questions conditioned on the dialogue context (Sultan et al., 2020; Ren et al., 2021a).",
"Zamani et al. (2020) and Qi et al. (2020) define a question utility function to guide the generation of conversational questions.",
"Nakanishi et al. (2019); Jia et al. (2020) incorporate knowledge with auxiliary tasks.",
"These methods may generate irrelevant questions due to their pure generation nature.",
"Conversational question ranking (Aliannejadi et al., 2019) retrieves questions from a collection based on the given context, so the questions are mostly relevant to the context.",
"Kundu et al. (2020) propose a pair-wise matching network between context and question to do question ranking.",
"Some studies also use auxiliary tasks to improve ranking performance, such as Natural Language Inference (Kumar et al., 2020) and relevance classifica-tion (Rosset et al., 2020).",
"The retrieved questions are often unnatural without considering the conversational characteristics, e.g., anaphora and ellipsis.",
"CQS rewrites the retrieved self-contained questions into conversational ones by incorporating the conversational characteristics.",
"Existing applicable methods for CQS are all MLE based (Xu et al., 2020; Yu et al., 2020; Lin et al., 2020b; Vakulenko et al., 2020), which often get stuck in easily learned tokens as each token is treated equally by MLE.",
"Instead, we propose a MLD based RISE framework to formulate CQS as a HCMDP, which is able to discriminate different tokens through explicit editing actions, so that it can learn to emphasize the conversational tokens and generate more natural and appropriate questions.",
"In this paper, we have proposed a minimum Levenshtein distance (MLD) based Reinforcement Iterative Sequence Editing (RISE) framework for Conversational Question Simplification (CQS).",
"To train RISE, we have devised an Iterative Reinforce Training (IRT) algorithm with a novel Dynamic Programming based Sampling (DPS) process.",
"Extensive experiments show that RISE is more effective and robust than several state-of-the-art CQS methods.",
"A limitation of RISE is that it may fail to decide whether to stop or continue editing and leave out necessary information.",
"In future work, we plan to address this issue by learning a reward function that considers the whole editing process through adversarial learning (Goodfellow et al., 2014).",
"To facilitate the reproducibility of the results, we share the codes of all methods at https://github.",
"com/LZKSKY/CaSE_RISE .",
"We thank the reviewers for their valuable feedback.",
"This research was partially supported by the National Key R&D Program of China with grant No. 2020YFB1406704, the Natural Science Foundation of China (61972234, 61902219, 62072279), the Key Scientific and Technological Innovation Program of Shandong Province (2019JZZY010129), the Tencent WeChat Rhino-Bird Focused Research Program (JR-WXG-2021411), the Fundamental Research Funds of Shandong University, and the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, https: //hybrid-intelligence-centre.nl .",
"All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors."
] | [
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other"
] |
[
"Task-oriented dialogue systems are increasingly prevalent in healthcare settings, and have been characterized by a diverse range of architectures and objectives.",
"Although these systems have been surveyed in the medical community from a non-technical perspective, a systematic review from a rigorous computational perspective has to date remained noticeably absent.",
"As a result, many important implementation details of healthcare-oriented dialogue systems remain limited or under-specified, slowing the pace of innovation in this area.",
"To fill this gap, we investigated an initial pool of 4070 papers from well-known computer science, natural language processing, and artificial intelligence venues, identifying 70 papers discussing the system-level implementation of task-oriented dialogue systems for healthcare applications.",
"We conducted a comprehensive technical review of these papers, and present our key findings including identified gaps and corresponding recommendations.",
"Dialogue systems 1 have a daily presence in many individuals' lives, acting as virtual assistants (Hoy, 2018), customer service agents (Xu et al., 2017), or even companions (Zhou et al., 2020).",
"While some systems are designed to conduct unstructured conversations in open domains ( chatbots ), others ( task-oriented dialogue systems ) help users to complete tasks in a specific domain (Jurafsky and Martin, 2009; Qin et al., 2019).",
"Task-oriented dialogue systems can potentially play an important role in health and medical care (Laranjo et al., 2018), and they have been adopted by growing numbers of patients, caregivers, and clinicians (Kearns et al., 2019).",
"Nonetheless, there remains a translational 1 We follow an inclusive definition of dialogue systems , encompassing any intelligent systems designed to converse with humans via natural language.",
"gap (Newman-Griffis et al., 2021) between cutting-edge, foundational work in dialogue systems and prototypical or deployed dialogue agents in healthcare settings.",
"This limits the proliferation of scientific progress to real-world systems, constraining the potential benefits of fundamental research.",
"We move towards closing this gap by conducting a comprehensive, scientifically rigorous analysis of task-oriented healthcare dialogue systems.",
"Our underlying objectives are to",
"(a) explore how these systems have been employed to date, and",
"(b) map out their characteristics, shortcomings, and subsequent opportunities for follow-up work.",
"Importantly, we seek to address the limitations of prior systematic reviews by extensively investigating the included systems from a computational perspective.",
"Our primary contributions are as follows:",
"1. We systematically search through 4070 papers from well-known technical venues and identify 70 papers fitting our inclusion criteria.",
"2 2. We analyze these systems based on many factors, including system objective, language, architecture, modality, device type, and evaluation paradigm, among others.",
"3. We identify common limitations across systems, including an incomplete exploration of architecture, replicability concerns, ethical and privacy issues, and minimal investigation of usability or engagement.",
"We offer practical suggestions for addressing these as an on-ramp for future work.",
"In the long term, we hope that the gaps and opportunities identified in this survey can stimulate more rapid advances in the design of task-oriented healthcare dialogue systems.",
"We also hope that the survey provides a useful starting point and synthesis of prior work for NLP researchers and practi-2 A full listing of these papers is provided in the appendix.",
"Dialogue systems in healthcare have been the focus of several recent surveys conducted by the medical and clinical communities (Vaidyam et al., 2019; Laranjo et al., 2018; Kearns et al., 2019).",
"These surveys have investigated the real-world utilization of deployed systems, rather than examining their design and implementation from a technical perspective.",
"In contrast, studies examining these systems through the lens of AI and NLP research and practice have been limited.",
"Zhang et al. (2020) and Chen et al. (2017) presented surveys of recent advances in general-domain task-oriented dialogue systems.",
"Although they provide an excellent holistic portrait of the subfield, they do not delve into aspects of particular interest in healthcare settings (e.g., system objectives doubling as clinical goals), limiting their usefulness for this audience.",
"Vaidyam et al. (2019), Laranjo et al. (2018), and Kearns et al. (2019) conducted systematic reviews of dialogue systems deployed in mental health (Vaidyam et al., 2019) or general healthcare (Laranjo et al., 2018; Kearns et al., 2019) settings.",
"Vaidyam et al. (2019) examined 10 articles, and Laranjo et al. (2018) and Kearns et al. (2019) examined 17 and 46 articles, respectively.",
"All surveys were written for a medical audience and focused on healthcare issues and impact, covering few articles from AI, NLP, or general computer science venues.",
"Montenegro et al. (2019) and Tudor Car et al. (2020) recently reviewed 40 and 47 articles, respectively, covering conversational agents in the healthcare domain.",
"These two surveys are the closest to ours, but differ in important ways.",
"First, our focus is on a specific class of conversational agents: task-oriented dialogue systems.",
"The surveys by Montenegro et al. (2019) and Tudor Car et al. (2020) used a wider search breading their ability to provide extensive technical depth.",
"We also reviewed more papers (70 articles), which were then screened using a more thorough taxonomy as part of the analysis.",
"Some aspects that we considered that differ from these prior surveys include the overall dialogue system architecture, the dialogue management architecture, the system evaluation methods, and the dataset(s) used when developing and/or evaluating the system.",
"We designed search criteria in concert with our goal of filling a translational information gap between fundamental dialogue systems research and applied systems in the healthcare domain.",
"To do so, we retrieved articles from well-respected computer science, AI, and NLP databases and screened them for focus on task-oriented dialogue systems designed for healthcare settings.",
"Our target databases were: (1) ACM, 3 (2) IEEE, 4 (3) the ACL Anthology, 5 and (4) the AAAI Digital Library.",
"6 ACM and IEEE are large databases of papers from prestigious conferences and journals across many CS fields, including but not limited to robotics, human-computer interaction, data mining, and multimedia systems.",
"The ACL Anthology is the premier database of publications within NLP, hosting papers from major conferences and topic-specific venues (e.g., SIGDIAL , organized by the Special Interest Group on Discourse and Dialogue).",
"The AAAI Digital Library hosts papers not only from the AAAI Conference on Artificial Intelligence , but also from other AI conferences, AI Magazine , and the Journal of Artificial Intelligence Research .",
"We applied the following inclusion criteria when identifying papers: The main focus must be on the technical design or implementation of a task-oriented dialogue system.",
"The system must be designed for health-related applications.",
"The article must not be dedicated to one specific module of the system's architecture (e.g., 3 https://dl.acm.org/ 4 https://ieeexplore.ieee.org/ 5 https://www.aclweb.org/anthology/ 6 https://aaai.org/Library/library.php 6639 the natural language understanding component of a health-related dialogue system).",
"Although a narrower scopee.g., developing improved methods for slot-fillingis common when publishing in the dialogue systems community, these papers tend to place more emphasis on technical design irrespective of application context, offering less coverage of the system-level characteristics that are the target of this survey.",
"We followed four steps in our screening process.",
"First ( Initial Search) , we applied a predefined search query to the databases to populate our initial list of papers.",
"To generate the query, we used the keywords task-oriented, dialogue system, conversational agent, health, and healthcare, and synonyms and abbreviations of these keywords.",
"We short-listed papers using these keywords individually as well as in combination with one another.",
"Next ( Title Screening ), we performed a preliminary screening through the initial list of papers by reading the titles, keeping those that satisfied the inclusion criteria.",
"Then ( Abstract Screening ), we went through the list of papers remaining after the title screening and read the abstracts, keeping those that satisfied the inclusion criteria.",
"Lastly ( Final Screening ), we read the body of the papers remaining after the abstract screening and kept those that satisfied the inclusion criteria.",
"These funnel filtering processes were conducted by a computer science graduate student (a fluent L2 English speaker) using predefined search and screening guidelines.",
"Questions or uncertainties regarding a paper's compliance with inclusion criteria were forwarded along to the senior project lead (a computer science professor and fluent L1 English speaker with expertise in NLP) and final consensus was reached via discussion among the two parties.",
"We detail the number of papers remaining after each screening step in Table",
"1. Overall, this screening process combined with our subsequent surveying methods spanned eight months, covering papers published prior to January 2021.",
"In total, 70 papers (21 from ACM, 31 from IEEE, 16 from ACL, and 2 from AAAI 7 ) satisfied the inclusion criteria.",
"We survey papers meeting our inclusion criteria according to a wide range of parameters, and present our findings in the following 7 Papers about task-oriented dialogue systems published at AAAI often focus on one specific component of the system from a technical perspective, rather than proposing a conversational agent as a whole.",
"Therefore, only two papers from the AAAI Digital Library satisfied the inclusion criteria.",
"subsections, grouped into thematic categories: ontology (4), system architecture (5), system design (6), dataset (7), and system evaluation (8).",
"We map each paper to its domain of research (4.1), system objective (4.2), target audience (4.3), and language (4.4), and present our findings.",
"Task-oriented dialogue systems can potentially impact many facets of healthcare in society (Bick-more and Giorgino, 2004).",
"We define a domain of research as the healthcare area in which the system operates.",
"We identify both broad domains and more specific subcategories thereof based on the systems surveyed, outlined in Figure",
"1. Broad domain categories include mental health , physical health , health information , patient assistance , physician assistance , cognitive or developmental health , and other (comprising subcategories not easily classifiable to one of the broader domains).",
"Systems in the mental health domain supported individuals with mental or psychological health conditions, and systems in the cognitive or developmental health domain were a close analogue for individuals with conditions impacting memory, executive, or other cognitive function.",
"Systems in the physical health domain were targeted towards individuals with specific physical health concerns, including infectious (e.g., Covid-19), noninfectious (e.g., cancer), and temporary (e.g., preg-6640 System Objective # Papers Diagnosis 7 Monitoring 8 Intervention 13 Counseling 5 Assistance 12 Multi-Objective 25 Table 2: Distribution of system objectives across the surveyed papers. Additional details regarding multi-objective papers are provided in the appendix. nancy) conditions.",
"Systems providing health information performed general-purpose actions such as offering advice or suggesting disease diagnoses.",
"Finally, systems performing patient assistance or physician assistance supported specific patientor physician-focused healthcare tasks.",
"Dialogue systems designed for mental health , physical health , and health information were the most prevalent, covering 51 of the 70 included papers.",
"Task-oriented dialogue systems define value relative to the goals of a target task.",
"We define the system objective as the healthcare task for which a system is designed.",
"Some system objectives may be closely aligned with a single domain, whereas others may occur in many different domains (e.g., monitoring mental, physical, or cognitive condi-tions).",
"Thus, although the domain of research and system objective may frequently correlate, there is not by necessity a direct association.",
"Included systems were categorized as being designed to: diagnose a health condition (e.g., by predicting whether the user suffers from cognitive de-cline); monitor user states (e.g., by tracking their diets or periodically checking their mood); intervene by addressing users' health concerns or improving their states (e.g., by teaching children how to map facial expressions to emotions); counsel users without providing any direct intervention (e.g., by listening to users' concerns and empathizing with them); or assist users by providing information or guidance (e.g., by answering questions from users who are filling out forms).",
"Many systems were also categorized as multi-objective , meaning that they were designed for more than one of those goals.",
"signed for more than one target objective.",
"Among multi-objective systems, those that were designed for both diagnosis and assistance had the highest frequency (7/25); we provide additional details regarding these systems in Table 8 of the appendix.",
"Separately, we also considered the role of engagement as an objective of each system.",
"We define this as a goal of engaging target users in interaction, irrespective of underlying health goals.",
"Engagement may be of particular interest in healthcare settings since it can be critical in encouraging adoption or adherence with respect to healthcare outcomes (Montenegro et al., 2019).",
"Surprisingly, almost 60% of the papers (41 of the 70 surveyed) did not mention any goals pertaining to engaging users in more interactions.",
"The final consumers of healthcare systems often fall into three groups: patients , caregivers , and clinicians .",
"Table 3 shows the number of systems surveyed that focus on each category.",
"We find that out of 70 task-oriented dialogue systems, 59 are designed specifically for patients.",
"Most general-domain dialogue systems research has been conducted in English and other high-resource languages (Artetxe et al., 2020).",
"Expanding language diversity may extend the benefits of health-related dialogue systems more globally.",
"As shown in Figure 2, among the systems included in our review a majority (56%) are designed for English speakers.",
"Encouragingly, several of the included systems did focus on lower-resource languages, including Telugu (Duggenpudi et al., 2019), Bengali (Rahman et al., 2019), and Setswana (Grover et al., 2009).",
"We investigate both the general architecture of the system (5.1), and if applicable, the dialogue man-6641",
"Task-oriented dialogue systems are generally designed using pipeline or end-to-end architectures.",
"Pipeline architectures typically consist of separate components for natural language understanding, dialogue state tracking, dialogue policy, and natural language generation.",
"The ensemble of the dialogue state tracker and dialogue policy is the dialogue manager (Chen et al., 2017).",
"End-to-end architectures train a single model to produce output for a given input, often interacting with structured external databases and requiring extensive training data (Chen et al., 2017).",
"As shown in Table 4, only 2.85% of papers (2 of the 70 surveyed) implemented an end-to-end system; this is unsurprising given the limited training data available in most healthcare domains.",
"We also found that 14% (10 papers) did not directly specify the architecture of their developed system.",
"Unlike other pipeline components that impact user experience and engagement but not fundamental decision-making, the dialogue manager is central to overall functionality (Zhao et al., 2019); thus,",
"we afford it special attention.",
"In rule-based approaches, the system interacts with users based on a predefined set of rules, with success conditioned upon coverage of all relevant cases (Siangchin and Samanchuen, 2019).",
"Intent-based approaches seek to extract the user's intention from the dialogue, and then perform the relevant action (Jurafsky and Martin, 2009).",
"In hybrid dialogue management architectures, the system leverages a combination of rule-based and intent-based approaches, and fi-nally corpus-based approaches mine the dialogues of human-human conversations and produce responses using retrieval methods or generative methods (Jurafsky and Martin, 2009).",
"As shown in Table 5, among papers reporting on dialogue management architecture, we observe a fairly even mix of rule-based, intent-based, and hybrid architectures.",
"Modality, the channel through which information is exchanged between a computer and a human (Karray et al., 2008), can play an important role in dialogue quality and user satisfaction (Bilici et al., 2000).",
"Unimodal systems use a single modality for information exchange, whereas multimodal systems use multiple modalities (Karray et al., 2008).",
"Systems reviewed in this survey operated using one or more of several modalities.",
"In text-based or spoken interaction, users interact with the system by typing or speaking, respectively.",
"In interaction via graphical user interface (GUI) , users interact with the system through the use of visual elements.",
"In general, multimodal dialogue systems can be flexible and robust, but especially challenging to implement in the medical domain (Sonntag et al., 2009).",
"We find that 49 papers describe unimodal systems and 21 describe multimodal systems.",
"Ta-6642 Unimodal Multimodal Category # Papers Category # Papers Text 23 Spoken + Text 14 Spoken 25 Spoken + GUI 4 GUI 1 Text + GUI 3 Table 6: Distribution of modality type across the unimodal (49 total, left) and multimodal (21 total, right) systems surveyed.",
"ble 6 provides more details regarding their distribution across modalities.",
"Dialogue systems may facilitate interaction using a variety of devices (Arora et al., 2013), ranging from telephones (Garvey and Sankaranarayanan, 2012) to computers (McTear, 2010) to any other technology that allows interaction (e.g., VR-based avatars (Brinkman et al., 2012b; McTear, 2010)).",
"We categorized the included systems as mobile , telephone , desktop/laptop , in-car , PDA , robot , virtual environment , or virtual reality (including virtual agents and avatars) systems, considering systems as multi-device if they leveraged multiple devices for interaction.",
"As shown in Figure 3, we found that multi-device and mobile-based dialogue systems were most popular.",
"Table 9 in the appendix provides additional details regarding multi-device systems.",
"Data is crucial for effective system development (Serban et al., 2015), but many datasets for training dialogue systems are smaller than those used for other NLP tasks (Lowe et al., 2017).",
"This is even more pronounced in the healthcare domain, in part due to the risk of data misuse by others or the lack of data sharing incentives (Lee and Yoon, 2017).",
"We reviewed each paper for information regarding the data used during system development, focusing on dataset size, availability, and privacy-preserving measures.",
"Only 20 papers provide details about the data used (two papers provided a link to the dataset, and the remaining 18 discussed the dataset size).",
"Unfortunately, the remaining papers did not provide rationale for their lack of data or other replicability information.",
"Our assumption is that often the data contained sensitive information, preventing authors from releasing specific details, but only 19 of the 70 included papers provided information about data-related privacy or ethical considerations.",
"Only 10 mentioned Institutional Review Board (IRB) approval for their dataset and/or task, despite IRB (or equivalent) review being a crucial step towards ensuring that research is conducted ethically and in such a way that protects human subjects to the extent possible (Amdur and Biddle, 1997).",
"We examined the means through which systems were evaluated both qualitatively and quantitatively (Deriu et al., 2019; Hastie, 2012).",
"We defined human evaluation , often implemented in prior work through questionnaires (Grover et al., 2009; Holmes et al., 2019; Parde and Nielsen, 2019; Wang et al., 2020) or direct feedback from real-world users (Deriu et al., 2019), as an evaluation that relies on subjective, first-hand, human user experience.",
"In contrast, automated evaluation provides an objective, quantitative measurement of one or more dimensions of the system from a mathematical perspective (Finch and Choi, 2020).",
"Some metrics used for automated evaluation of the reviewed systems include measures of task performance (Ali et al., 2020) and completion rates (Holmes et al., 2019), response correctness (Ros-ruen and Samanchuen, 2018), and response time (Grover et al., 2009).",
"In Table 7, we observe that nearly half of the papers conducted human evaluations; however, a large percentage (37%) also did not discuss evaluation at all.",
"We further analyzed papers conducting human evaluations and found that they included an average of 26 (mode = 12) participants.",
"More details regarding the human and automated evaluations are provided in Tables 10, 11, and 12 of the appendix.",
"In a follow-up analysis of system usability , defined as the degree to which users are able to engage with a system safely, effectively, efficiently, and enjoyably (Lee et al., 2019), we observed that 33 papers explicitly evaluated the usability of their system.",
"We identify common limitations across many surveyed systems, accompanied by recommendations for addressing them in future work.",
"9.1 Incomplete Exploration of System Design We observed little system-level architectural diversity across the surveyed systems, with most (83%) having a pipeline architecture.",
"This architectural homogeneity limits our understanding of good design practice within this domain.",
"Recent studies demonstrate that end-to-end architectures for task-oriented dialogue systems could compete with pipeline architectures given sufficient high-quality data (Hosseini-Asl et al., 2020; Ham et al., 2020; Bordes et al., 2017; Wen et al., 2016).",
"However, the external knowledge sources often leveraged in end-to-end systems are notoriously complex in many healthcare sub-domains (Campillos-Llanos et al., 2020).",
"Additionally, for healthcare applications interpretability is highly desired (Ham et al., 2020), but explanations are often obfuscated in end-to-end systems (Ham et al., 2020; Wen et al., 2016).",
"Finally, users of these systems may seek guidance on sensitive topics, which can exacerbate privacy concerns (Xu et al., 2021).",
"Any system trained on large, weakly curated datasets may also learn unpleasant behaviors and amplify biases in the training data, in turn producing harmful consequences (Dinan et al., 2021; Bender et al., 2021).",
"We recommend further experimentation with architectural design, in parallel with work towards developing high-quality healthcare dialogue datasets, which to date remain scarce (Farzana et al., 2020).",
"interaction.",
"However, it is well-established that individuals from certain demographic groups are more comfortable conversing with dialogue systems via speech (Tudor Car et al., 2020).",
"Text-based systems may also be more likely to violate privacy considerations (Tudor Car et al., 2020).",
"Thus, we recommend that researchers engage in further exploration of multimodal or spoken dialogue systems when applicable and appropriate.",
"Many of the surveyed systems were also implemented on mobile phones.",
"Although an advantage of mobile-based systems is that they are readily available using a technology familiar to most users, Lee et al. (2018) found that users significantly reduced their usage over time when engaging long-term with mobile health applications.",
"Tudor Car et al. (2020) suggest that one way to overcome this limitation in mobile-based systems is by directly embedding them in applications or platforms with which users already engage habitually (e.g., Facebook Messenger).",
"This more ambient dissemination approach may facilitate easier and more lasting integration of system use in individuals' daily lives.",
"Finally, we identified that most systems (84%) target only patients, with research on systems targeted towards clinicians and caregivers remaining limited.",
"We recommend further exploration of systems targeted towards these critical audiences.",
"This may offer broad, high-impact support in understanding, diagnosing, and treating patients' health issues (Valizadeh et al., 2021; Kaelin et al., 2021).",
"Data accessibility restrictions reduce the capacity of public health research (Strongman et al., 2019), and these limitations may be partially responsible for the imbalance of pipeline versus end-to-end architectures (9.1).",
"Only a small percentage of papers surveyed (29%) ventured to discuss the quantity or characteristics of the data used during system development in any way.",
"A lack of data transparency hinders scientific progress and severely impedes replicability.",
"We call upon researchers to publish data when permissible by governing protocol, and descriptive statistics to the extent allowable when circumstances prevent data release.",
"We also view the development of high-quality, publicly available datasets as an important frontier in translational dialogue systems research (9.1).",
"ods (34%).",
"This prevents the research community from replicating developed systems and generalizing study findings more broadly (Walker et al., 2018).",
"Well-established guidelines exist and are being increasingly enforced within the NLP community to prevent reproducibility issues (Dodge et al., 2019).",
"The disregard of reproducibility best practices observed with many healthcare dialogue systems may be partially attributed to the most common target venues for this work, which may place less emphasis on replication.",
"This validates a central motivator for publishing this surveywithout adequate inclusion of target domain and technical stakeholders in interdisciplinary, translational research, progress will remain constrained.",
"We strongly urge researchers in this domain to provide implementation details in their publications.",
"Real-world medical data facilitates the development of high-quality healthcare applications (Bertino et al., 2005; Di Palo and Parde, 2019; Farzana et al., 2020), but protecting the rights and privacy of contributors to the data is critical for ensuring ethical research conduct (Institute of Medicine, 2009), as is proper treatment of copyright protections.",
"We screened all included papers for coverage of privacy and ethical concerns, and observed that only 27% of the surveyed papers considered participant or patient privacy in the design of their system.",
"Moreover, only 14% of the surveyed papers documented any evidence of Institutional Review Board (or IRB-equivalent) approval.",
"Research involving healthcare dialogue systems is unquestionably human-centered, and as such the absence of ethical oversight in the design of such systems is a grave concern.",
"Although technical researchers entering this space may be unfamiliar with human subjects research and protocol, we urge all dialogue systems researchers to submit their experimental design and protocol for review by an appropriate external review board.",
"We also ask that researchers consider the potential harms from use or misuse of their systems, following guidelines established by the ACM Code of Ethics.",
"8 9.4 Room for Increased Language Diversity We observed that most systems (56%) targeted English speakers.",
"Developing multilingual dialogue systems or systems for speakers of low-resource 8 https://www.acm.org/code-of-ethics languages brings up various challenges (Lpez-Czar Delgado and Araki, 2005), but solving this problem could have have tremendous benefit for individuals in non-English speaking communities with minimal or unreliable healthcare access.",
"The systems developed by Duggenpudi et al. (2019), Rahman et al. (2019), and Grover et al. (2009) provide case examples for how such systems may be implemented.",
"We also note that while troubling, a 56% share of systems targeted towards English speakers is consistent with linguistic homogeneity in the field in general, and actually slightly low relative to many other NLP tasks (Mielke, 2016; Bender, 2009).",
"Healthcare dialogue systems may on some level offer a case example for how applications originally designed for high-resource (i.e., English-language) settings can be adapted and re-engineered to provide better coverage of the diverse, real-world potential user base.",
"Finally, more than 50% (37/70) of the included papers did not evaluate system usability or general user experience.",
"Usability testing can improve productivity and safeguard against errors (Rogers et al., 2005), both of which are critical in healthcare tasks.",
"Therefore, we urge the research community to consider and assess usability when designing for this domain.",
"The systems among those surveyed that do this already (e.g., those developed by Wang et al. (2020), Lee et al. (2020b), Wei et al. (2018), or Demasi et al. (2020)) provide case examples for how it might be done.",
"Almost 60% of the surveyed systems were not explicitly designed to engage users, despite this being a common objective in the general domain (Ghazarian et al., 2019).",
"Healthcare dialogue systems may stand to benefit particularly well from such measures (Parde, 2018), since patient engagement is predictive of adoption and adherence to healthcare outcomes (Montenegro et al., 2019).",
"To increase user satisfaction and system performance, we recommend that the research community more purposefully consider engagement when designing their healthcare-oriented dialogue systems.",
"In this work, we conducted a systematic technical survey of task-oriented dialogue systems used for health-related purposes, providing much-needed",
"analyses from a computational perspective and narrowing the translational gap between basic and applied dialogue systems research.",
"We comprehensively searched through 4070 papers in computer science, NLP, and AI databases, finding 70 papers that satisfied our inclusion criteria.",
"We analyzed these papers based on numerous technical factors including the domain of research, system objective, target audience, language, system architecture, system design, training dataset, and evaluation methods.",
"Following this, we identified and summarized gaps in this existing body of work, including an incomplete exploration of system design, replicability concerns, potential ethical and privacy issues, room for increased language diversity, and minimal investigation of usability or user engagement.",
"Finally, we presented evidence-based recommendations stemming from our findings as a launching point for future work.",
"It is our hope that interested researchers find the information provided in this survey to be a unique and helpful resource for developing task-oriented dialogue systems for healthcare applications.",
"Beyond the concrete changes suggested during the discussion, it is important to consider the broader ethical implications of task-oriented dialogue systems in healthcare settings.",
"Although the goal of such systems may not be to replace human healthcare providers, it is likely that deployed systems would support clinicians, defraying workload for overburdened individuals.",
"In doing so, these systems may have significant impact on healthcare decision-making.",
"Machines are imperfect, and thus a possible harm is that these systems may misinterpret user input or make incorrect predictions a mistake that in high-stakes healthcare settings could prove detrimental or even dangerous.",
"Researchers and developers should be cognizant of possible harms stemming from the use and misuse of task-oriented dialogue systems for healthcare settings, and should implement both automated (e.g., strict thresholds for diagnostic suggestions) and human (e.g., training to ensure staff awareness of potential system fallibilities) safeguards.",
"Moreover, a potential benefit of these systems is their potential to meaningfully and beneficially extend healthcare access to underserved populations.",
"As such, it is important to ensure that automated systems do not fall prey to the same biases often observed among human healthcare providers (FitzGerald and Hurst, 2017).",
"Systems trained to perform healthcare tasks using datasets that are not representative of the target population may exhibit poorer performance with users who already experience marginalization or are otherwise vulnerable, impeding or even reversing benefits.",
"We call upon researchers to examine, debias, and curate their training data such that task-oriented dialogue systems for healthcare applications elevate, rather than diminish, outcomes for the historically underserved users which they are best poised to benefit.",
"This material is based upon work supported by the National Science Foundation under Grant No. 2125411, and by a start-up grant from the University of Illinois at Chicago.",
"Any opinions, findings, and conclusions or recommendations are those of the authors and do not necessarily reflect the views of the National Science Foundation.",
"We thank the anonymous reviewers for their insightful suggestions, which further strengthened this work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Pretrained neural models such as BERT, when fine-tuned to perform natural language inference (NLI), often show high accuracy on standard datasets, but display a surprising lack of sensitivity to word order on controlled challenge sets.",
"We hypothesize that this issue is not primarily caused by the pretrained model's limitations, but rather by the paucity of crowd-sourced NLI examples that might convey the importance of syntactic structure at the fine-tuning stage.",
"We explore several methods to augment standard training sets with syntactically informative examples, generated by applying syntactic transformations to sentences from the MNLI corpus.",
"The best-performing augmentation method, subject/object inversion, improved BERT's accuracy on controlled examples that diagnose sensitivity to word order from 0 .",
"28 to 0 .",
"73 , without affecting performance on the MNLI test set.",
"This improvement generalized beyond the particular construction used for data augmentation, suggesting that augmentation causes BERT to recruit abstract syntactic representations.",
"In the supervised learning paradigm common in NLP, a large collection of labeled examples of a particular classification task is randomly split into a training set and a test set.",
"The system is trained on this training set, and is then evaluated on the test set.",
"Neural networksin particular systems pretrained on a word prediction objective, such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019)excel in this paradigm: with large enough pretraining corpora, these models match or even exceed the accuracy of untrained human annotators on many test sets (Raffel et al., 2019).",
"At the same time, there is mounting evidence that high accuracy on a test set drawn from the same distribution as the training set does not indicate that the model has mastered the task .",
"This discrepancy can manifest as a sharp drop in accuracy when the model is applied to a different dataset that illustrates the same task (Talmor and Berant, 2019; Yogatama et al., 2019), or as excessive sensitivity to linguistically irrelevant perturbations of the input (Jia and Liang, 2017; Wallace et al., 2019).",
"One such discrepancy, where strong performance on a standard test set did not correspond to mastery of the task as a human would define it, was documented by McCoy et al. (2019b) for the Natural Language Inference (NLI) task.",
"In this task, the system is given two sentences, and is expected to determine whether one (the premise) entails the other (the hypothesis).",
"Most if not all humans would agree that NLI requires sensitivity to syntactic structure; for example, the following sentences do not entail each other, even though they contain the same words: (1) The lawyer saw the actor.",
"(2) The actor saw the lawyer.",
"McCoy et al. constructed the HANS challenge set, which includes examples of a range of such constructions, and used it to show that, when BERT is fine-tuned on the MNLI corpus (Williams et al., 2018), the fine-tuned model achieves high accuracy on the test set drawn from that corpus, yet displays little sensitivity to syntax; the model wrongly concluded, for example, that (1) entails (2).",
"We consider two explanations as to why BERT fine-tuned on MNLI fails on HANS.",
"Under the Representational Inadequacy Hypothesis , BERT fails on HANS because its pretrained representations are missing some necessary syntactic information.",
"Under the Missed Connection Hypothesis , BERT extracts the relevant syntactic information from the input (cf. Goldberg 2019; Tenney et al. 2019), but it fails to use this information with HANS because there are few MNLI training examples that indicate how syntax should support NLI (McCoy et al., 2019b).",
"It is possible for both hypotheses to be correct: there may be some aspects of syntax that BERT has not learned at all, and other aspects that have been learned, but are not applied to perform inference.",
"The Missed Connection Hypothesis predicts that augmenting the training set with a small number of examples from one syntactic construction would teach BERT that the task requires it to use its syntactic representations.",
"This would not only cause improvements on the construction used for augmentation, but would also lead to generalization to other constructions.",
"In contrast, the Representational Inadequacy Hypothesis predicts that to perform better on HANS, BERT must be taught how each syntactic construction affects NLI from scratch.",
"This predicts that larger augmentation sets will be required for adequate performance and that there will be little generalization across constructions.",
"This paper aims to test these hypotheses.",
"We constructed augmentation sets by applying syntactic transformations to a small number of examples from MNLI.",
"Accuracy on syntactically challenging cases improved dramatically as a result of augmenting MNLI with only about 400 examples in which the subject and the object were swapped (about 0 . 1% of the size of the MNLI training set).",
"Crucially, even though only a single transformation was used in augmentation, accuracy increased on a range of constructions.",
"For example, BERT's accuracy on examples involving relative clauses (e.g, The actors called the banker who the tourists saw 9 The banker called the tourists ) was 0 .",
"33 without augmentation, and 0 .",
"83 with it.",
"This suggests that our method does not overfit to one construction, but taps into BERT's existing syntactic representations, providing support for the Missed Connection Hypothesis.",
"At the same time, we also observe limits to generalization, supporting the Representational Inadequacy Hypothesis in those cases.",
"HANS is a template-generated challenge set designed to test whether NLI models have adopted three syntactic heuristics.",
"First, the lexical overlap heuristic is the assumption that any time all of the words in the hypothesis are also in the premise, the label should be entailment .",
"In the MNLI training set, this heuristic often makes correct predictions, and almost never makes incorrect predictions.",
"This may be due to the process by which MNLI was generated: crowdworkers were given a premise and were asked to generate a sentence that contradicts or entails the premise.",
"To minimize effort, workers may have overused lexical overlap as a shortcut to generating entailed hypotheses.",
"Of course, the lexical overlap heuristic is not a generally valid inference strategy, and it fails on many HANS examples; e.g., as discussed above, the lawyer saw the actor does not entail the actor saw the lawyer .",
"HANS also includes cases that are diagnostic of the subsequence heuristic (assume that a premise entails any hypothesis which is a contiguous subsequence of it) and the constituent heuristic (as-sume that a premise entails all of its constituents).",
"While we focus on counteracting the lexical overlap heuristic, we will also test for generalization to the other heuristics, which can be seen as particularly challenging cases of lexical overlap.",
"Examples of all constructions used to diagnose the three heuristics are given in Tables A.5, A.6 and A.7.",
"Data augmentation is often employed to increase robustness in vision (Perez and Wang, 2017) and language (Belinkov and Bisk, 2018; Wei and Zou, 2019), including in NLI (Minervini and Riedel, 2018; Yanaka et al., 2019).",
"In many cases, augmentation with one kind of example improves accuracy on that particular case, but does not generalize to other cases, suggesting that models overfit to the augmentation set (Jia and Liang, 2017; Ribeiro et al., 2018; Iyyer et al., 2018; Liu et al., 2019).",
"In particular, McCoy et al. (2019b) found that augmentation with HANS examples generalized to a different word overlap challenge set (Dasgupta et al., 2018), but only for examples similar in length to HANS examples.",
"We mitigate such overfitting to superficial properties by generating a diverse set of corpus-based examples, which differ from the challenge set both lexically and syntactically.",
"Finally, Kim et al. (2018) used a similar augmentation approach to ours but did not study generalization to types of examples not in the augmentation set.",
"We generate augmentation examples from MNLI using two syntactic transformations: INVERSION , which swaps the subject and object of the source sentence, and PASSIVIZATION .",
"For each of these transformations, we had two families of augmenta-Original MNLI example: There are 16 El Grecos in this small collection.",
"tion sets.",
"The ORIGINAL PREMISE strategy keeps the original MNLI premise and transforms the hypothesis; and TRANSFORMED HYPOTHESIS uses the original MNLI hypothesis as the new premise, and the transformed hypothesis as the new hypothesis (see Table 1 for examples, and A.2 for de-tails).",
"We experimented with three augmentation set sizes: small ( 101 examples), medium ( 405 ) and large ( 1215 ).",
"All augmentation sets were much smaller than the MNLI training set ( 297 k ).",
"1 We did not attempt to ensure the naturalness of the generated examples; e.g., in the INVERSION transformation, The carriage made a lot of noise was transformed into A lot of noise made the carriage .",
"In addition, the labels of the augmentation dataset were somewhat noisy; e.g., we assumed that INVERSION changed the correct label from entailment to neutral , but this is not necessarily the case (if The buyer met the seller , it is likely that The seller met the buyer ).",
"As we show below, this noise did not hurt accuracy on MNLI.",
"Finally, we included a random shuffling condition, in which an MNLI premise and its hypothesis were both randomly shuffled, with a random label.",
"We used this condition to test whether a syntactically uninformed method could teach the model that, when word order is ignored, no reliable inferences can be made.",
"We added each augmentation set separately to the MNLI training set, and fine-tuned BERT on each resulting training set.",
"Further fine-tuning details are in Appendix A.1.",
"We repeated this process for five random seeds for each combination of augmentation strategy and augmentation set size, except for the most successful strategy ( INVERSION + TRANSFORMED HYPOTHESIS ), for which we had 15 runs for each augmentation size.",
"Following McCoy et al. (2019b), when evaluating on HANS, we merged the neutral and contradiction labels produced by the model into a single non-entailment label.",
"For both ORIGINAL PREMISE and TRANSFORMED HYPOTHESIS , we experimented with using each of the transformations separately, and with a combined dataset including both inversion and passivization.",
"We also ran separate experiments with only the passivization examples with an entailment label, and with only the passivization examples with a non-entailment label.",
"As a baseline, we used 100 runs of BERT fine-tuned on the unaugmented MNLI (McCoy et al., 2019a).",
"We report the models' accuracy on HANS, as well as on the MNLI development set (MNLI test set labels are not publicly available).",
"We did not tune any parameters on this development set.",
"All of the comparisons we discuss below are significant at the p < 0 .",
"01 level (based on two-sided t-tests).",
"Accuracy on MNLI was very similar across augmentation strategies and matched that of the unaugmented baseline ( 0 . 84 ), suggesting that syntactic augmentation with up to 1215 examples does not harm overall performance on the dataset.",
"By contrast, accuracy on HANS varied significantly, with most models performing worse than chance (which is 0 . 50 on HANS) on non-entailment examples, suggesting that they adopted the heuristics (Fig-ure 1).",
"The most effective augmentation strategy, by a large margin, was inversion with a transformed hypothesis.",
"Accuracy on the HANS word overlap cases for which the correct label is non-entailment e.g., the doctor saw the lawyer 9 the lawyer saw the doctor was 0 .",
"28 without augmentation, and 0 .",
"73 with the large version of this augmentation set.",
"Simultaneously, this strategy decreased BERT's accuracy on the cases where the heuristic makes the correct prediction ( The tourists by the actor called the authors The tourists called the authors ); in Original premise Transformed hypothesis P a ss i v i z a t i on I n v e r s i on C o m b i ned 0 101 405 1215 0 101 405 1215 0% 50% 100% 0% 50% 100% 0% 50% 100% Number of augmentation examples A cc u r a cy on HANS ( l e x i c a l o v e r l ap c a s e s on l y ) The lexical overlap heuristic makes... A correct prediction An incorrect prediction Figure 1: Comparison of syntactic augmentation strategies.",
"fact, the best model's accuracy was similar across cases where lexical overlap made correct and incorrect predictions, suggesting that this intervention prevented the model from adopting the heuristic.",
"The random shuffling method did not improve over the unaugmented baseline, suggesting that syntactically-informed transformations are essential (Table A.2).",
"Passivization yielded a much smaller benefit than inversion, perhaps due to the presence of overt markers such as the word by , which may lead the model to attend to word order only when those are present.",
"Intriguingly, even on the passive examples in HANS, inversion was more effective than passivization (large inversion augmentation: 0 . 13 ; large passivization augmentation: 0 . 01 ).",
"Finally, inversion on its own was more effective than the combination of inversion and passivization.",
"We now analyze in more detail the most effective strategy, inversion with a transformed hypothesis.",
"First, this strategy is similar on an abstract level to the HANS subject/object swap category, but the two differ in vocabulary and some syntactic properties; despite these differences, performance on this HANS category was perfect ( 1 . 00 ) with medium and large augmentation, indicating that BERT ben-efited from the high-level syntactic structure of the transformation.",
"For the small augmentation set, accuracy on this category was 0 .",
"53 , suggesting that 101 examples are insufficient to teach BERT that subjects and objects cannot be freely swapped.",
"Conversely, tripling the augmentation size from medium to large had a moderate and inconsistent effect across HANS subcases (see Appendix A.3 for case-by-case results); for clearer insight about the role of augmentation size, it may be necessary to sample this parameter more densely.",
"Although inversion was the only transformation in this augmentation set, performance also improved dramatically on constructions other than subject/object swap (Figure 2); for example, the models handled examples involving a prepositional phrase better, concluding, for instance, that The judge behind the manager saw the doctors does not entail The doctors saw the manager (unaugmented: 0 . 41 ; large augmentation: 0 . 89 ).",
"There was a much more moderate, but still significant, improvement on the cases targeting the subsequence heuristic; this smaller degree of improvement suggests that contiguous subsequences are treated separately from lexical overlap more generally.",
"One exception was accuracy on NP/S inferences, such as the managers heard the secretary resigned 9 The managers heard the secretary , which improved dramatically from 0 .",
"02 (unaugmented) to 0 .",
"50 (large augmentation).",
"Further improvements for subsequence cases may therefore require augmentation with examples involving subsequences.",
"A range of techniques have been proposed over the past year for improving performance on HANS.",
"These include syntax-aware models (Moradshahi et al., 2019; Pang et al., 2019), auxiliary models designed to capture pre-defined shallow heuristics so that the main model can focus on robust strategies Lexical overlap heuristic Subsequence heuristic Constituent heuristic 0 101 405 1215 0 101 405 1215 0 101 405 1215 0% 25% 50% 75% 100% Number of augmentation examples A cc u r a cy on HANS The heuristic makes... A correct prediction An incorrect prediction Figure 2: Augmentation using subject/object inversion with a transformed hypothesis.",
"(Clark et al., 2019; He et al., 2019; Mahabadi and Henderson, 2019), and methods to up-weight diffi-cult training examples (Yaghoobzadeh et al., 2019).",
"While some of these approaches yield higher accuracy on HANS than ours, including better generalization to the constituent and subsequence cases (see Table A.4), they are not directly comparable: our goal is to assess how the prevalence of syntactically challenging examples in the training set affects BERT's NLI performance, without modifying either the model or the training procedure.",
"Our best-performing strategy involved augmenting the MNLI training set with a small number of instances generated by applying the subject/object inversion transformation to MNLI examples.",
"This yielded considerable generalization: both to an-other domain (the HANS challenge set), and, more importantly, to additional constructions, such as relative clauses and prepositional phrases.",
"This supports the Missed Connection Hypothesis: a small amount of augmentation with one construction induced abstract syntactic sensitivity, instead of just inoculating the model against failing on the challenge set by providing it with a sample of cases from the same distribution (Liu et al., 2019).",
"At the same time, the inversion transformation did not completely counteract the heuristic; in particular, the models showed poor performance on passive sentences.",
"For these constructions, then, BERT's pretraining may not yield strong syntactic representations that can be tapped into with a small nudge from augmentation; in other words, this may be a case where our Representational Inadequacy Hypothesis holds.",
"This hypothesis predicts that pretrained BERT, as a word prediction model, struggles with passives, and may need to learn the properties of this construction specifically for the NLI task; this would likely require a much larger number of augmentation examples.",
"The best-performing augmentation strategy involved generating premise/hypothesis pairs from a single source sentencemeaning that this strategy does not rely on an NLI corpus.",
"The fact that we can generate augmentation examples from any corpus makes it possible to test if very large augmentation sets are effective (with the caveat, of course, that augmentation sentences from a different domain may hurt performance on MNLI itself).",
"Ultimately, it would be desirable to have a model with a strong inductive bias for using syntax across language understanding tasks, even when overlap heuristics lead to high accuracy on the training set; indeed, it is hard to imagine that a human would ignore syntax entirely when understanding a sentence.",
"An alternative would be to create training sets that adequately represent a diverse range of linguistic phenomena; crowdworkers' (rational) preferences for using the simplest generation strategies possible could be counteracted by approaches such as adversarial filtering (Nie et al., 2019).",
"In the interim, however, we conclude that data augmentation is a simple and effective strategy to mitigate known inference heuristics in models such as BERT.",
"This research was supported by a gift from Google, NSF Graduate Research Fellowship No. 1746891, and NSF Grant No.",
"BCS-1920924.",
"Our experiments were conducted using the Maryland Advanced Research Computing Center (MARCC)."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"This paper introduces the Webis Gmane Email Corpus 2019, the largest publicly available and fully preprocessed email corpus to date.",
"We crawled more than 153 million emails from 14,699 mailing lists and segmented them into semantically consistent components using a new neural segmentation model.",
"With 96% accuracy on 15 classes of email segments, our model achieves state-of-the-art performance while being more efficient to train than previous ones.",
"All data, code, and trained models are made freely available alongside the paper.",
"1 1 Introduction Email is perhaps the most reliable and ubiquitous means of digital communication.",
"Notwithstanding the mainstream adoption of social media for private communication as of about 2010, email prevails unrivaled for workplace communication and beyond.",
"Compared to social media, however, emails have attracted much less research attention in the fields of computational linguistics, natural language processing, and information retrieval.",
"Key reasons for the neglect can be found in the presumed diffi-culty of obtaining emails at scale, the lack of open technologies to parse them, and that, despite their importance, they are hardly considered en vogue .",
"Although mailing lists as a rich and accessible source for emails have been tapped before, this has never been done at scale.",
"Our contributions in this respect are (1) the Webis Gmane Email Crawl 2019, a crawl of more than 153 million emails from a wide range of mailing lists, (2) the Chipmunk email segmenter, a newly developed end-to-end neural model, and (3) the complete preprocessing of the crawled emails using our model to construct the largest corpus of ready-to-use emails to date.",
"Our corpus encompasses more than 20 years worth of discussions on a diverse set of topics, including important political and societal issues.",
"1",
"https://webis.de/publications.html?q=ACL+2020 We believe that providing the research community with access to clean and preprocessed communication data from emails will foster open research in several areas, such as the analysis of dialogs and discourse, stylometry, language evolution, argument mining, as well as information retrieval, and the synthesis of conversations and argumentation.",
"For research purposes, the three primary sources of email data are public mailing lists and newsgroups, volunteered or leaked private email datasets, and email databases at companies and service providers.",
"The WestburyLab USENET corpus (Shaoul and Westbury, 2009, 2013) was crawled between 2005 and 2011.",
"More widely employed has been the 20 newsgroups corpus (Lang, 1995).",
"The W3C corpus compiles the public W3C mailing lists (Wu, 2005), Jiang et al. (2013) examined 8 years of patch submissions to the Linux Kernel Mailing List, and Niedermayer et al. (2017) inspected the process of standardization across IETF bodies via its mailing lists.",
"The CSpace corpus consists of 15,000 student dialogs volunteered for research during a management course at CMU (Kraut et al., 2004).",
"All of the above have been extensively analyzed (Minkov et al., 2005, 2006; Lawson et al., 2010), yet the most widely studied corpus remains the leaked Enron corpus (Klimt and Yang, 2004), built as part of the U.S. FERC's investigation into the Enron Corporation.",
"It has been subject to studies on speech act and dialog analysis (Gold-stein et al., 2006), named entities (Lawson et al., 2010), and word usage patterns (Keila and Skil-licorn, 2005), among many others.",
"Another recently leaked dataset comprises the Clinton emails that surfaced during the 2016 U.S. presidential election (De Felice and Garretson, 2018).",
"Regarding email data at companies and service providers, not many researchers are able to disclose their datasets (Avigdor-Elgrabli et al., 2018).",
"Regardless of their source, emails are usually unstructured and difficult to process even for human readers (Sobotta, 2016).",
"Thus, many approaches have been proposed for cleansing newsgroup and email data.",
"As one of the earliest, de Carvalho and Cohen (2004) developed a specialized method for detecting and removing signatures based on typical text indicators.",
"Tang et al. (2005) developed a high-accuracy model for detecting blocks of noncontent in emails using a mixture of SVM models and hard-coded rules.",
"An unsupervised approach was employed by Contractor et al. (2010), who applied a noisy channel model for filtering out noncontent.",
"Similarly, Bettenburg et al. (2011) used spell checking techniques for uncovering technical artifacts like source code, disentangling them from the main content.",
"A more general approach, befittingly named Zebra , was published by Lampert et al. (2009), who split messages into a series of structural and semantic zones, such as author text and signature .",
"Finally, Repke and Krestel (2018) developed Quagga , the first neural end-to-end model inspired by Lampert et",
"al.'s Zebra, which showed very substantial performance improvements.",
"Most machine learning-based approaches rely on classifying lines of text, either by detecting the start and the end of structural blocks with specialized models, or by assessing each line individually via its surrounding context.",
"With the increase in machine-generated emails, recent studies have shifted their focus away from dialogs and towards parsing and categorizing (Ab-erdeen et al., 2010; Zhang et al., 2017) or threading notifications (Ailon et al., 2013), as well as automated template induction (Proskurnia et al., 2017; Castro et al., 2018; Kocayusufoglu et al., 2019).",
"Our dataset was crawled from Gmane, 2 a popular email-to-newsgroup gateway, which allows users to subscribe to mailing lists via the NNTP newsgroup protocol that formed the basis for the Usenet.",
"While Gmane's web portal has been offline for years and was recently replaced by a minimal web-site under a new domain name, the newsgroup portal is still alive and messages from active mailing lists arrive every day.",
"Unlike a mailing list server, a newsgroup server keeps an archive of messages, allowing a user to download the history of a newsgroup even if they did not participate in it from 2 https://news.gmane.io or rather: nntp://news.gmane.io the beginning.",
"Traditional newsgroup servers often have a limited retention period, though fortunately, Gmane archived all messages since its launch in 2002.",
"About a million messages date back even further to the year 2000 and a small number even to the early 90's.",
"The latest message in our corpus is from mid-May 2019, which is when we stopped crawling.",
"Considering this enormous time span and the uncertain future of Gmane, we see archiving these messages as both a great research opportunity and an attempt at preserving our digital heritage.",
"Following the style of the Usenet, Gmane groups are ordered in a hierarchy of subjects under the common gmane root.",
"This hierarchy makes it easy to categorize mailing lists into topical domains giving a rough overview of what is being talked about.",
"The majority of groups is of a generally technical nature (e.g., in gmane.comp or gmane.linux ), a large number of other categories exists, most notably culture , politics , science , education , music , games , and recreation .",
"Below these main categories, a plethora of individual subjects are found.",
"A cursory topic modeling study reveals not only software development discussions, but also debates about environmental issues, climate change, gender equality, mobility, health, business, international conflicts, general political concerns, philosophy, religious beliefs, and many more.",
"We crawled all 14,699 groups of which 64 turned out empty.",
"Gmane provides another 18,450 groups under the gwene hierarchy for headlines and snippets from RSS feeds.",
"We crawled those as well, but have not analyzed nor added them to the dataset.",
"The crawling process ran slowly over a period of months, producing 604 GiB of compressed WARC files.",
"The total number of messages across all groups sums up to 153,310,330 usable mails.",
"The largest individual group is the Linux Kernel Mailing List with 2.4 million messages followed by the KDE bug tracking list with 2 million.",
"Excluding any obvious bug tracking or software patch submission lists, 113 million messages remain.",
"Further excluding the largest hierarchies comp , linux , and os , 24 million messages are left, which boil down to 7.8 million when restricted to the seven exemplary hierarchies mentioned above.",
"6.4 million of these are English-language, the rest is mostly German, French, and Spanish.",
"The 153 million messages were posted by 6.4 million unique sender addresses and the influx volume amounts to over 710,000 messages per month.",
"This number is a bit lower at 610,000 when only considering the past five years.",
"The top 10 groups account for an average of 1.2 million messages each and the top 10,000 groups for 15,250, while the bottom 5,000 groups have on average 100 messages.",
"Emails are a noisy data source in need of heavy preprocessing.",
"The Usenet and early-day mailing lists developed (n)etiquettes for how to write proper messages.",
"These included quoting as little as possible, replying inline, separating signatures by two hyphens, and restricting their length to four lines.",
"Emailthe more recent in particularobeys none of those.",
"For the most part, messages consist of large blocks of nested quotationsoften mutilated by the 78-character limit, various formats for introducing quotations, exuberant unstructured personal signatures, and automated signatures added by the author's user agent or the mailing list server.",
"Moreover, technical emails often contain fragments of source code, log data, or diffs.",
"Automated emails also contain semi-structured templates like ASCII-formatted tables.",
"Extracting the content of such unstructured messages proves difficult and long threads pose a challenge even to human readers.",
"We started the preprocessing by parsing the MIME contents into pure plaintext.",
"To preserve the privacy of users, the name parts of email addresses were replaced with a 16-byte base64 prefix of the address's SHA-256 hashes with @example.com appended as the authority part.",
"Headers were reduced to the set necessary for retaining date-time, subject, thread, sender, and recipient information.",
"Finally, the contents of each email were segmented and annotated using our model described in Section 4, allowing for easy extraction of not only the main content, but also other structured information.",
"The final corpus is packaged as compressed line-based JSON files that can be easily indexed into Elastic-search using its bulk API.",
"Cleansing email plaintexts is laborious and first requires splitting them into different functional and semantic segments (also sometimes called zones).",
"Our first attempt at this was a re-implementation of the classic approach by Tang et al.",
"Despite our best efforts, its handcrafted feature set, and the need to train two individual SVMs for each type of content block caused generalizability and scala-bility issues on our much larger and more diverse dataset.",
"Also, a context window of three lines was not nearly enough to reliably identify all types of content blocks, and making the window larger did not yield satisfying results due to the simplicity and the lack of shared weights among the individual models.",
"We also needed a much more fine-grained segmentation, which not even the more recent neural approach by Repke and Krestel could deliver without substantial changes, so it was decided to develop a new email segmenter.",
"We identified 15 common segments recurring in emails: (1) paragraphs (main content), (2) salutations , (3) closings , (4) quotations , (5) quotation markers (quotation author and date), (6) inline email headers , (7) personal signatures , (8) automated MUA signatures (i.e., mail user agent, but also mailing list details or advertising), (9) source code , (10) source code diffs , (11) log data , and (12) technical noise (e.g., inline attachments or PGP signatures), (13) semi-structured tabular data , (14) ornaments (e.g., separator lines), and (15) structural section headings (e.g., in a call for pa-pers).",
"We annotated segments in a stratified sample of 3,033 emails from a range of different groups, totaling 170,309 line annotations.",
"Annotated segments are mostly unambiguous so that a single annotator can produce consistent and high-quality annotations in multiple correction passes.",
"Although the sample is technically multilingual, most emails are in English.",
"Of the 3,033 emails, we set aside 300 for model validation and extracted another sample of 1.5 million emails and concatenated them to a single file of 80 million lines (2.8 GiB).",
"Here we replaced all email addresses with the token @EMAIL@ , all URLs with @URL@ , mapped num-bers to the digit 0, replaced all hexadecimal values with @HASH@ , runs of four or more indenting spaces with @INDENT@ , split words on special characters (mainly for tokenizing quotations and source code), and normalized Unicode characters to NFKC.",
"We used this processed dump to train a fastText embedding (Grave et al., 2017) with a default vector dimension of 100.",
"The segmentation model has a hybrid RNN-CNN architecture as depicted in Figure 1.",
"For each line, we define a context window of c = 4 lines before Line Embedding(n, 100) Context Embedding (2c + 1, n, 100) Bi-GRU Encoder (n, 128) Convolution 128 (4,",
"and after the current line and build an embedding matrix of dimensions (2 c + 1 , n, 100) , n being the maximum word token count per line.",
"Longer lines are truncated by discarding tokens between the first 75% and the last 25% of the line preserving both line beginnings and endings with preference to beginnings, where more structural markers are found under left-to-right writing.",
"Shorter lines and the top or the bottom of the context matrix are padded if required.",
"We feed the line embeddings into separate 128-unit Bi-GRU encoders and the context matrix into a 2D CNN.",
"The idea is that, unlike normal text, plaintext emails have a spatial layout where the horizontal and the vertical axis both convey structural information (most importantly the first column).",
"The CNN performs 128 convolutions with a filter size of 4 4 , then another 128 convolutions with a filter size of 3 3 , and finally a max pooling of 2 2 .",
"After either of the Bi-GRUs and the first convolution, we add in a batch normalization.",
"The CNN output is fed into a 128-dimensional dense layer, concatenated with the other outputs, and then regularized with a dropout of 0.25 before being passed to the softmax layer with outputs for the 15 segment labels and <empty> for blank lines.",
"All layers have ReLU as their activation function.",
"We train the model using a mini-batch size of 128 and the Adam optimizer with hinge loss.",
"Choosing this over crossentropy is a decent trade-off between accuracy and generalizability.",
"While crossentropy tends to find a closer fit, giving higher accuracy on very similar data, this comes at the expense of uncertain decisions and early overfitting.",
"Hinge loss prefers larger margins, generalizing better to new and entirely unseen data in a line-wise classification scenario with strict block boundaries.",
"To evaluate our model, we compare it with two others from the literature in two different settings.",
"Table 1 compiles an overview of the evaluation results.",
"A confusion matrix for our model is found in Table 2 in the appendix.",
"Our model achieves 96% accuracy over all classes.",
"Mapped to binary decisions between paragraphs and non-paragraphs, the accuracy goes up to 98%.",
"The recall on the paragraph class is 93% (see Table 2).",
"The majority class are quotations with 33%, followed by patches with 16%.",
"Paragraphs come in at 11%.",
"Note that the patch class is overrepresented not because we sampled primarily patch emails, but because patches tend to be longer than normal emails.",
"Still, we achieve an overall high accuracy on all classes.",
"A typical segmentation is provided as an example in Figure 3.",
"To test the model's ability to generalize to unseen data, we annotated 300 emails from the Enron corpus, whose class distribution differs significantly from mailing lists: The emails are much shorter and most lines belong to paragraphs (36%) or empty lines (26%).",
"Quotations account for 8% and code or patches are non-existent.",
"Though significantly lower, our model still shows an acceptable accuracy of about 88%.",
"The excessive use of inline headers containing multiple lines of forwarding addresses appears to be the main challenge for our model, which is expected considering that forwarding emails to dozens of recipients is rare on mailing lists.",
"Furthermore, the proprietary Enron mail user agent had an unusual forwarding and quotation style quite unlike the more common Thunderbird, GMail, or Outlook notations.",
"Finally, we compared our model against Quagga , the state-of-the-art neural segmentation model by Repke and Krestel and a re-implementation of Tang et",
"al.'s SVM email cleaning approach.",
"Unfortunately, a training routine was missing from Quagga's source code, so we re-implemented this part as closely to the original as possible with one notable exception.",
"We changed the way the model handles quotations.",
"The original model did not have a quotation class and was instead trained to ignore quotation indicators so as to predict normal content segments within quotations also.",
"This is very different from how our model handles quotations and it renders the reconstruction of a conversation from the segments alone impossible.",
"We prefer our approach to classify quotations as a separate segment, which retains the structure of emails and one can simply strip the quotation indicators and then apply the model recursively.",
"We trained our own Quagga on all 16 classes for 20 epochs (the model started overfitting after more epochs).",
"Although the original model was trained and tested on only five classes, the extended and retrained model performs only slightly worse than ours with 94% accuracy overall and very similar scores for most of the frequent classes.",
"The degradation on the Enron corpus appears to be worse than in our model (with the exception of the log data class).",
"In conclusion, we can say that both models perform equally well, though our model achieves overall better generalization.",
"In terms of training speed, we found our approach to be faster and more efficient, since it relies on a 2D context window instead of a vertical RNN for sequences of lines.",
"The model by Tang et al. required a great deal of feature engineering and the training of many separate models.",
"For simplicity, and in accordance with the original paper, we mapped all labels to the reduced set of content , quotation , header , signature , code ( patch ), and <empty> .",
"Despite the smaller number of classes, the model's accuracy lags behind the neural models with 80% on Gmane and only 72% on the Enron corpus.",
"The distribution of email data raises ethical concerns, such as possible violations of privacy and legal requirements, which we addressed to the best of our ability.",
"All emails in our corpus are from public mailing lists and by policy, Gmane only accepts such lists whose users are comfortable with Gmane Corpus Enron Corpus Ours Quagga Tang Ours Quagga Tang All Classes 0.96 0.94 0.80 0.88 0.83 0.72 Quotation 0.99 0.99 0.99 0.99 0.88 0.85 Patch 0.95 0.95 0.46 Paragraph 0.93 0.90 0.90 0.95 0.91 0.89 Log Data 0.84 0.77 0.24 0.74 MUA Sig.",
"their emails being publicly readable.",
"At the time of writing, the original messages in our corpus are openly available to anyone through the NNTP interface and other mailing list archives.",
"Nevertheless, we took measures to avoid abuse of the readily parsed and compiled form of the data, one being the aforementioned anonymization of email addresses to inhibit trivial mass harvesting.",
"Furthermore, we enforce a strict release policy in compliance with the GDPR academic exemptions.",
"Access to the data is granted solely to researchers and academic institutions and we prohibit further distribution for non-academic purposes.",
"This paper contributes the largest email corpus to date.",
"The corpus is targeted mainly at discussion and dialog-based research in NLP.",
"We gave an overview of the topics discussed in the corpus, demonstrating that it is a valuable source for several NLP tasks, such as argument mining.",
"Despite the prevalence of technical conversations, various important and controversial societal issues are covered in the corpus as well.",
"To minimize user overhead, we developed a new neural model for segmenting emails with high precision and recall, which achieves state-of-the-art performance, allowing for fine-grained extraction of structural elements from emails.",
"All the resources developed in this paper are freely available.",
"3 3 Visit",
"https://webis.de/data.html?q=Webis-Gmane-19 for details about gaining access to the corpus.",
"The pre-trained Chipmunk model as well as the code we used for training it and for conducting our experiments are hosted at GitHub (https://github.com/webis-de/ACL-20)."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Unsupervised document representation learning is an important task providing pre-trained features for NLP applications.",
"Unlike most previous work which learn the embedding based on self-prediction of the surface of text, we explicitly exploit the inter-document information and directly model the relations of documents in embedding space with a discriminative network and a novel objective.",
"Extensive experiments on both small and large public datasets show the competitiveness of the proposed method.",
"In evaluations on standard document classification, our model has errors that are relatively 5 to 13% lower than state-of-the-art unsupervised embedding models.",
"The reduction in error is even more pronounced in scarce label setting.",
"Rapid advance in deep methods for natural language processing has contributed to a growing need for vector representation of documents as input features.",
"Applications for such vector representations include machine translation (Sutskever et al., 2014), text classification (Dai and Le, 2015), image captioning (Mao et al., 2015), multi-lingual document matching (Pham et al., 2015), question answering (Rajpurkar et al., 2016), and more.",
"This work studies unsupervised training for encoders that can efficiently encode long paragraph of text into compact vectors to be used as pre-trained features.",
"Existing solutions are mostly based on the assumption that a good document embedding can be learned through modeling the intra -document information by predicting the occurrence of terms inside the document itself.",
"We argue that such an assumption might not be sufficient to obtain meanEqually contribution.",
"Traditional document representation models such as Bag-of-words (BoW) and TF-IDF show competitive performance in some tasks (Wang and Manning, 2012).",
"However, these models treat words as flat tokens which may neglect other useful information such as word order and semantic distance.",
"This in turn can limit the models effectiveness on more complex tasks that require deeper level of understanding.",
"Further, BoW models suffer from high dimensionality and sparsity.",
"This is likely to prevent them from being used as input features for downstream NLP tasks.",
"Continuous vector representations for documents are being developed.",
"A successful thread of work is based on the distributional hypothesis, and use contextual information for context-word predictions.",
"Similar to Word2Vec (Mikolov et al., 2013), PV (Le and Mikolov, 2014) is optimized by predicting the next words given their contexts in a document, but it is conditioned on a unique document vector.",
"Word2Vec-based methods for computing document embeddings achieve state-of-the-art performance on document embedding.",
"Such methods rely on one strong underlying assumption: it is necessary to train the document embedding to optimize the prediction of the target words in the document.",
"In other words, the objective requires the model to learn to predict the target words in surface text.",
"We argue that there are several concerns with such a self-prediction assumption.",
"The strategy of predicting target words therefore only exploits in-document information, and do not explicitly model the inter-document distances.",
"We believe an ideal embedding space should also infer the relations among training documents.",
"For example, if all documents in the corpus are about machine learning , then the concept of machine learning becomes less critical in the embedding.",
"However, if the corpus contains documents from different areas of computer science, then the concept of machine learning should be encoded in any document relevant to it.",
"We therefore claim that the embedding of a document should not only depend on the document itself but also the other documents in the corpus , even though previous work seldom makes this consideration.",
"In addition, accurate predictions at the lexicon or word level do not necessarily reflect that the true semantics have been learned.",
"For example, in IMDB dataset review No.10007: ... the father did such a good job.",
"Obviously, good can be replaced with synonyms like nice without significantly altering the meaning of the sentence.",
"However, since the synonyms are treated as independent tokens in PV and Doc2VecC, the lexicon good must be predicted exactly.",
"Moreover, to accurately predict the final word job , the embedding probably only needs to know that did a good job is a very common phrase, without having to understand the true meaning of job .",
"This example shows that in order to accurately predict a local lexicon, the embedding might opt to encode the syntactic relationship instead of true semantics.",
"Enforcing document embeddings to make predictions at the word level could be too strong of an objective.",
"More specifically, we argue that the true semantics should not only depend on a small context, but also the relations with other training documents at document level.",
"To address the above concerns we propose a novel model for learning document embedding unsupervisedly.",
"In contrast with previous work (PV and Doc2Vec), we model documents according to two aspects.",
"First, we abandon the concept of context word prediction when training an embedding model.",
"Instead we propose a self-supervision learning framework to model inter-document information.",
"Conceptually, we use the embedding to determine whether a sentence belongs to a document.",
"Our encoder is equipped with a discriminator to classify whether a sentence embedding is derived from a document given that document's embedding.",
"This explicitly enforces documents to be spread reasonably in the embedding space without any labels so that they can be discriminated.",
"To the best of our knowledge, this is the first deep embedding work to explicitly model the inter-document relationship.",
"Second, in our approach the predictions are inferred at the sentence level.",
"This avoids the effect of only predicting the surface meaning in word level (e.g. good vs. nice).",
"Unlike previous work, our model is explicitly optimized to represent documents as combinations of sequence embedding beyond words seen in training.",
"Below we summarize the key contributions: We present a deep and general framework and a novel objective for learning document representation unsupervisedly.",
"Our models are end-to-end, easy to implement, and flexible to extend.",
"We perform experiments through sentiment analysis and topic classification to show that our model, referred to as self-discriminative document embedding (SDDE) , is competitive to the state-of-the-art solutions based on traditional context-prediction objectives.",
"Our extensive experiments quantitatively and qualitatively show that SDDE learns more effective features that capture more document-level information.",
"To the best of our knowledge, SDDE is the first deep network to model inter-instance information at document level.",
"We further propose to evaluate unsupervised document embedding models in weakly-supervised classification.",
"That is, lots of unlabeled documents with only few labels attached to some of them, which is a realistic scenario that unsupervised embedding could be particularly useful.",
"Here we give an overview of other related methods on learning unsupervised text representations.",
"Besides BoW and TF-IDF, Latent Dirichlet Allocation models (Deerwester et al., 1990; Blei et al., 2003) leverage the orthogonality of high-dimensional BoW features by clustering a probabilistic BoW for latent topics.",
"Several models extend from Word2Vec (Mikolov et al., 2013), using context-word predictions for training document embedding end-to-end.",
"PV (Le and Mikolov, 2014) keeps a document embedding matrix in memory and is jointly trained.",
"The required training parameters are linear to the number of documents and thus prohibit PV from being trained on a large corpus.",
"Moreover, expensive inference for new documents is required during testing.",
"To address the above concerns, Doc2VecC (Chen, 2017) combines PV and a denoising autoencoder (DEA) (Chen et al., 2012) with BoW vectors as global document information instead.",
"The final document embedding are then produced by simply averaging the jointly trained word embedding.",
"Another thread of work uses two-stage pipelines to construct sentence/document embedding from pre-trained word embedding.",
"Arora et al. (2017) propose post-processing weighting strategies on top of word embedding to build sentence representations.",
"WME (Wu et al., 2018) propose a random feature kernel method based on distance between pairs of words, which also shows inter-document information helps.",
"However, the cost scales with the size of training samples such that it is hard to be applied on large-scale dataset.",
"There have been more embedding work on sentences compared to documents.",
"These approaches mostly learn the sentence embedding by modeling the sentence-level (Kiros et al., 2015; Tang et al., 2017b,a; Logeswaran and Lee, 2018) or word-level (Pagliardini et al., 2018; Kenter et al., 2016; Arora et al., 2018) distribution hypothesis (Harris, 1954; Polajnar et al., 2015) in a large ordered corpus.",
"We note that the main difference between learning embedding for sentences and documents is that documents are not ordered in a corpus.",
"Some other work model sentences with RNN autoencoders (Hill et al., 2016a; Gan et al., 2017).",
"Documents often refer to long-length text containing multiple sentences, which might be hard to model with RNNs (Pascanu et al., 2013; Jing et al., 2017) and time-consuming on large corpus.",
"To facilitate downstream tasks, a document embedding is required to compress useful features into a compact representation.",
"It is not an easy task to learn discriminable features unsupervisedly since validation information is not accessible for training.",
"We first introduce some notations: V : the training corpus vocabulary of size |V| ; X = { X 1 , , X n } : a training corpus of document size n = |X | , in which each document X i is a set of sentences S i ; S i = { s 1 i , , s | x i | i } : a document divided into a set of sentences, of set size |S i | , in which each sentence s ji R |V| T j contains a sequence of variable length T j of word one-hot vectors w 1 j , , w T j j , each in R |V| 1 .",
"S is the set of total sentences n (cid:83) i =1 S i in the training corpus, of size m = |S| ; h w : the size of the word embedding and U R h w |V| : the word embedding projection matrix.",
"We use u w to denote the column in U for word w h s : the size of the sentence embedding and e s R h s : the embedding of sentence s .",
"d i R h s : document X i 's embedding.",
"Our goal is to learn a model that maps document $X_i$ to $d_i$ in an unsupervised manner.",
"Next, we formulate how SDDE represents a document, then introduce our self-discriminative learning procedure we use to train it.",
"We consider a document as mean of sentences , i.e., breaking a document into several subsequences.",
"We demonstrate several benefits of the design in SDDE.",
"First, decomposing long documents into shorter but reasonable semantic unit (e.g., sentences) makes encoding easier and faster since they can be processed in parallel.",
"Similar concepts of modeling documents hierarchically have shown benefits in some supervised tasks such as text classification (Yang et al., 2016).",
"It also makes the model insensitive to document length, which is important because length varies greatly in real documents (see Table 1).",
"In training, we further propose to represent a document during training using the average of only a subset of its sentences.",
"This special sentence-level dropout is beneficial for training by creating up to (cid:0) | X i | q (cid:1) combinations for each document, where q is the number of sentences to keep.",
"This enforces the local sentence representations to capture global information of a document by representing it with a subset of sentences in it.",
"The word embedding is used as globally shared building blocks for sentence embedding.",
"For a document X i = { s ji } , or S i , the embedding is derived from averaging the respective representations of subsequences.",
"Noted as: d i = 1 q q (cid:88) j =1 , s PS i ( s ) e j s , (1) where e s = E ( s ) , (2) where a sentence encoder E is introduced to produce sentence embedding for s ji .",
"In practice, sentences can be obtained by simply segmenting documents with punctuation.",
"In testing, the document embedding is obtained by averaging all the sentences in which: d = 1 |S i | (cid:88) s S i e s .",
"We note that averaging subsequences differs from averaging of words in two aspects.",
"First, each sentence is encoded individually before being averaged, allowing incorporation of word order into design rationale at least in a reasonable range.",
"Second, subsequences may have different lengths that reveal syntactic information.",
"To illustrate, BoW/mean-of-word models suffer from ambiguously modeling two different documents which are similar in word distributions but differ in some aspects of interest.",
"Mean-of-sentence model avoids such concern by modeling documents at the sentence level.",
"It could be expected that it is much less likely to find two documents with similar sentence distribution than similar word distribution.",
"Mean-of-sentences formulation can be smoothly reduced to mean-of-word models (by treating each word as a sentence) or pure sequence models (by treating each document as a very long sentence).",
"Unlike PV or Doc2VecC which emphasize modeling distributional information within individual documents, we model relations across documents.",
"The basic idea is that we hope to learn an embedding for each sentence in the document as well as a discriminator that determines whether a sentence belongs to a document.",
"Self-discriminative learning uses a discriminator network D to determine whether a sentence belongs to a document.",
"The aim is to learn a suitable embedding and a good discriminator to determine if a sentence belongs to Algorithm 1 Self-Discriminative Learning for Unsupervised Document Embedding Input: Documents X = { X i } n 1 , p , k , h w , h s .",
"We propose an objective that explicitly optimizes SDDE towards representing a document with mean of (encoded) sentences.",
"To optimize the discriminator D , we formulate it as a binary classifier that takes pairs of document embedding d of a document X i and a sentence embedding e s , ( d , e s ), as inputs.",
"The discriminator is asked to discriminate using d whether the sentence s belongs to the document X i or the other documents X (cid:48) X \\ { X i } .",
"The loss then becomes: log (1 D ( d , e p s )) + k (cid:88) (cid:96) =1 E s (cid:48) PS , s (cid:48) / S i (cid:104) log ( D ( d , e (cid:96) s (cid:48) )) (cid:105) , (4) for each document with one positive sample e p s and k negative samples of sentences e s (cid:48) , where s (cid:48) are not in the sentence set S i of X i , as s (cid:48) / S i .",
"Note that e p s is not used for d otherwise it would be trivial to be solved by the discriminator.",
"The spirits of self-discriminative learning can be understood as unsupervisedly mimicking supervised inference without knowledge of any label information by treating sentences from other documents as fake/negative samples.",
"One main concern that it is possible to find similar sentences in two different documents.",
"Our discriminator particularly addresses this issue by optimizing for the most discriminative sentences rather than similar ones that might not be critical to shape the embedding.",
"To minimize the loss, the encoder would tend to preserve the most essential feature to facilitate the discriminator to push away any two documents, which should encourage the embedding points spread even more widely across the space.",
"This in turn should result in more ease in downstream tasks: for example in learning a decision hyperplane in a classification task.",
"Next, we narrow down to sentence encoder E .",
"Given a sequence of word one-hot vectors as a sentence s = [ w 1 , . . . , w T ] , we project them into an embedding layer U to retrieve their corresponding word embedding.",
"Note that the word embedding are trained jointly.",
"where G is a single-layer RNN encoder using GRU cells to process the word embedding sequences and is a linear transform for dimension h s of sentence embedding.",
"Our second method, we use a schema of mean-of-word for advantage of fast generation, we average the word embedding w within a sentence s along time axis as AVG encoder: E ( s ) = ( ReLU ( 1 | s | | s | (cid:88) i =1 Uw ti )) , (6) Let us stress that the role of encoder E is to extract local feature from every sentence, and the overall objective encourages SDDE to represent documents as mean of sentence embedding.",
"An undesired pitfall comes from a learned weak encoder with a powerful discriminator causing the embedding produced by the encoder useless for downstream tasks.",
"To avoid such a pitfall, we Dataset #Class #Train / #Test Doc Length Sent Length IMDB 2 75k / 25k 124.6 8,856.7 11.6 105.5 Yelp P. 2 560k / 38k 70.0 4,117.8 8.2 48.7 AG's News 4 120k / 7.6k 27.2 66.1 11.8 66.7 DBPedia 14 560k / 70k 32.8 231.3 9.7 68.6 Table 1: Statistics of the datasets.",
"adopt lightweight network structures for discriminators.",
"For the IMDB datasets, we find inner product ( dV ) t E ( s ) with a learnable matrix V sufficient.",
"For the other datasets in Table 1, two fully-connected layers with ReLU activations in latent are used.",
"We relate our method to the Negative Sampling (Mikolov et al., 2013) technique which is a simpli-fied objective of softmax approximation (Mikolov et al., 2013; Mnih and Teh, 2012; Zoph et al., 2016).",
"Negative sampling has been used as an efficient and effective technique in learning word embedding.",
"We reformulate it to train document embedding by sampling in sentence level, which is easy to implement and efficient to train just like Word2Vec (Mikolov et al., 2013).",
"In practice, when training with mini-batches the documents for negative samples are from the same mini-batch, which requires small extra computation efforts.",
"SDDE requires a similar number of parameters as Doc2VecC does, but much less than PV.",
"In addition, the sentence encoder is flexible and can incorporate other techniques of text processing such as attention methods.",
"Public datasets on sentiment analysis and topic classification across diverse domains including online reviews, news, and Wiki pages are used including IMDB dataset (Maas et al., 2011) and the others from Zhang et al. (2015).",
"Table 1 provides a summary.",
"Only the training splits are used in training embedding with subsampled training split for cross-validations.",
"We preprocess the datasets by normalizing the text to lower class and replacing words appearing less than 10 times with a UNK token.",
"Out-of-vocabulary words in testing set are also replaced.",
"All our baseline models use the same input data.",
"To define the sentences for experiment, we utilize the sentence tokenizer from NLTK.",
"For the documents containing only one sentence we simply divide it into multiple subsequences for sampling.",
"We use RMSProp method for optimization.",
"Dropout 50% of input to the discriminator.",
"Weights are random-uniformly initialized between [-1, 1].",
"The other hyperparameters are summarized in Table 2. All the models use the same embedding size for fair comparison.",
"The trained document embedding are used for all the evaluations without specific tuning.",
"Generally, it is not easy to evaluate an unsupervised embedding model.",
"In Section 4.3 and 4.4, we evaluate the performance on standard document classification following the common practice used by previous work (Chen, 2017; Wu et al., 2018; Arora et al., 2017): a classification task with a linear SVM (Fan et al., 2008) trained on the labels in each dataset.",
"Next, we study unsupervised document embedding on two novel aspects.",
"In Section 4.5, we study a weakly-supervised classification setting that fits the realistic scenario of using unsupervised embedding with only a few labels.",
"In Section 4.6, we provide a metric to evaluate the effectiveness of modeling inter-document information.",
"We first compare our models with the others state-of-the-art competitors.",
"RNN-LM (Mikolov et al., 2010) and Skip-thought (Kiros et al., 2015) are RNN-based.",
"SIF (Arora et al., 2017), W2V-AVG (Mikolov et al., 2013), and WME (Wu et al., 2018) are two-stage approach that post-processing on word embedding.",
"We collect the results reported on the widely-used benchmark sentiment classification dataset IMDB.",
"For PV, we use Gensim implementation ( Rehurek and Sojka, 2010); versions https://www.nltk.org/ Model Error% Skip-thought* (Kiros et al., 2015) 17.4 SIF (GloVe) (Arora et al., 2017) 15.0 RNN-LM* (Mikolov et al., 2010) 13.6 W2V-AVG* (Mikolov et al., 2013) 12.7 DEA* (Chen et al., 2012) 12.5 PV-DM 20.0 PV-DBoW 12.0 WME (Wu et al., 2018) 11.5 Doc2VecC* (Chen, 2017) 11.7 SDDE-AVG 10.6 SDDE-RNN 10.2 Table 3: Sentiment Classification on IMDB Benchmark.",
"of both Distributed Memory (DM) and Distributed Bag of Words (DBoW) are reported.",
"For different encoders in SDDE, AVG is for averaging word embedding and RNN is for the RNN encoder.",
"Self-Discriminative Learning is Effective From the experiment result in Table 3, we can see that our self-discriminative learning is effective and superior on the document embedding models for both AVG and RNN versions.",
"SDDE-RNN achieves best accuracy on IMDB dataset 1.5% margin against Doc2VecC.",
"Study the Property of SDDE Unlike previous work modeling documents on the word or short context level, SDDE operates on the sentence level.",
"We study the false and true predictions output by SVM upon SDDE in comparison with Doc2VecC.",
"Table 5 show some examples that have the largest difference.",
"We observed SDDE can better capture contradicting or contrasting opinions.",
"We observe some wrong predictions (Row 3) are due to the ambivalent reviews.",
"SDDE is insensitive to the number of sentences; we found the effect of the number of sentences per document was trivial as shown in Table 4.",
"Next, we borrow some public large-scale dataset in Table 1 to further validate the effectiveness of SDDE compared to the other models.",
"For Doc2vecC and SIF, we use the code from the au-Label: 1 i don t even like watching those late night talk shows , but i found this one really interesting .",
"thors.",
"We use SIF to generate document embedding with Word2Vec trained on each dataset as its inputs.",
"Results are shown in Table 6.",
"SDDE-AVG performs slightly better across different dataset.",
"We hypothesis SDDE gets larger improvement on IMDB dataset since SDDE can handle longer documents better by exploiting sentence embeddings.",
"On the other hand, the RNN version of SDDE performs significantly worse than the word-averaging version.",
"We may remind the reader that state-of-the-art unsupervised document embedding models are not RNN-based.",
"The effects of word order are still unclear.",
"Wieting and Gimpel (2017) provides a study of sentence embedding.",
"We hypothesize that it may be difficult for an RNN encoder to learn to incorporate multi-domain information in datasets with many classes (e.g., DBpedia) unsupervisedly.",
"This would be our future work.",
"Next, we consider a more real-world weakly-supervised learning scenario: classification on the",
"datasets we have used in previous experiments, but this time only when very few labels are available.",
"We hypothesize that SDDE is particularly useful for classification with few labels since the self-discriminative learning has exploited the possible features to map the text onto the embedding space properly during the representation learning phase.",
"The embedding is expected to be more discriminable to facilitate finding the classification decision hyperplanes with fewer labeled data.",
"We randomly sample equal number of instances from each class to train a SVM and verify with the whole testing set.",
"PV-DBoW, Doc2VecC, and SDDE-AVG are examined in this experiment.",
"We use the same pre-trained document embedding as in the previous experiments.",
"We repeat the whole procedure 30 times and report the means and tune the penalty parameter C to find the best value for each model.",
"Results in Figure 2 and Table 7 show SDDEs outperform PV and Doc2VecC.",
"We visualize the training points with t-SNE.",
"As shown in Figure 1, SDDE seems to be able to spread the embedded data more widely, which eventually leads to better usage of scarce data for classification.",
"Definition We examine our assumption of the ability of SDDE to model inter-document feature.",
"Similar to (Hill et al., 2016b), we consider pairwise cosine similarity between documents with topic labels, this allows us to quantitatively evaluate unsupervised document embedding at inter-document level.",
"Our assumption is that: if pairwise similarity between documents is calculated based on different kinds of embedding, the better embedding results should comply with the properties of both high similarities between those documents within the same underlying class, denoted as IntraCos ( d, d (cid:48) ) and low similarities between document pairs from different classes, or InterCos ( d, d ) .",
"The mean distance: mean ( IntraCos ) mean ( InterCos ) , (7) is considered as our metric to avoid simply maximizing IntraCos or minimizing InterCos.",
"SDDE Provides High Separation Table 8 shows the evaluation.",
"The distances (Eq. 7) for the baseline models are small, which support our assumption that these methods are not able to model inter-document features properly.",
"On the other hand, distances for SDDE are significantly larger.",
"With the classification experiments, we believe SDDE better preserves meaningful inter-document features.",
"Figure 3 shows some meaningful clusters in SDDE in the World class as cohesive sub-classes.",
"Compared to mainstream unsupervised document embedding models (trained to perform predictions on the lexicon level) SDDE embeddings capture information at the inter-document level, as they are trained to maximize the distance between a sentence and a corresponding document.",
"We hope the underlying idea of SDDE offers the document-embedding community a new investigation direction.",
"Self-discriminative learning shows potential for real-world scarcely-labeled scenarios, and our future work will focus on joint training of representations for semi-supervised learning.",
"In NAACL HLT , pages 13671377.",
"Tom Kenter, Alexey Borisov, and Maarten de Rijke.",
"2016.",
"Siamese CBOW: optimizing word embeddings for sentence representations.",
"In ACL .",
"Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urta-sun, and Sanja Fidler.",
"2015.",
"Skip-thought vectors.",
"In NIPS , pages 32943302.",
"Quoc V. Le and Tomas Mikolov.",
"2014.",
"Distributed representations of sentences and documents.",
"In ICML , pages 11881196.",
"Lajanugen Logeswaran and Honglak Lee.",
"2018.",
"An efficient framework for learning sentence representations.",
"In ICLR .",
"Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean.",
"2013.",
"Distributed representations of words and phrases and their composition-ality.",
"In NIPS , pages 31113119.",
"This material is based upon work supported by Mi-crosoft Research Asia (MSRA) grant, and by Taiwan Ministry of Science and Technology (MOST) under grant number 108-2634-F-002 -019."
] | [
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"result",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Maximilian Nickel Facebook AI Research [email protected]",
"We investigate grounded sentence representations, where we train a sentence encoder to predict the image features of a given caption i.e., we try to imagine how a sentence would be depicted visuallyand use the resultant features as sentence representations.",
"We examine the quality of the learned representations on a variety of standard sentence representation quality benchmarks, showing improved performance for grounded models over non-grounded ones.",
"In addition, we thoroughly analyze the extent to which grounding contributes to improved performance, and show that the system also learns improved word embeddings.",
"Following the word embedding upheaval of the past few years, one of NLP's next big challenges has become the hunt for universal sentence representations: generic representations of sentence meaning that can be plugged into any kind of system or pipeline.",
"Examples include Paragraph2Vec (Le and Mikolov, 2014), C-Phrase (Pham et al., 2015), SkipThought (Kiros et al., 2015) and Fast-Sent (Hill et al., 2016a).",
"These representations tend to be learned from large corpora in an unsupervised setting, much like word embeddings, and eectively transferred to the task at hand.",
"Purely text-based semantic models, which represent word meaning as a distribution over other words (Harris, 1954; Turney and Pantel, 2010; Clark, 2015), suer from the grounding problem (Harnad, 1990).",
"It has been shown that grounding leads to improved performance on a variety of word-level tasks (Baroni, 2016; Kiela, 2017).",
"Unsupervised sentence representation models are often doubly exposed to the grounding problem, especially if they represent sentence mean-1 Work done while at Facebook AI Research.",
"ings as a distribution over other sentences, as in SkipThought (Kiros et al., 2015).",
"Here, we examine whether grounding also leads to improved sentence representations.",
"In short, the grounding problem is characterized by the lack of an association between symbols and external information.",
"We address this problem by aligning text with paired visual data and hypothesize that sentence representations can be enriched with external informationi.e., groundedby forcing them to capture visual semantics.",
"We investigate the performance of these representations and the eect of grounding on a variety of semantic benchmarks.",
"There has been much recent interest in generating actual images from text (Goodfellow et al., 2014; van den Oord et al., 2016; Mansimov et al., 2016).",
"Our method takes a slightly dierent approach: instead of predicting actual images, we train a deep recurrent neural network to predict the latent feature representation of images.",
"That is, we are specifically interested in the semantic content of visual representations and how useful that information is for learning sentence representations.",
"One can think of this as trying to imagine, or form a mental picture, of a sentence's meaning (Chrupaa et al., 2015).",
"Much like a sentence's meaning in classical semantics is given by its model-theoretic ground truth (Tarski, 1944), our ground truth is provided by images.",
"Grounding is likely to be more useful for concrete words and sentences: a sentence such as democracy is a political system does not yield any coherent mental picture.",
"In order to accommodate the fact that much of language is abstract, we take sentence representations obtained using text-only data (which are better for representing abstract meaning) and combine them with the grounded representations that our system learns (which are good for representing concrete meaning), leading to multi-modal sentence representations.",
"In what follows, we introduce a system for grounding sentence representations by learning to predict visual content.",
"Although it is not the primary aim of this work, it is important to first examine how well this system achieves what it is trained to do, by evaluating on the COCO5K image and caption retrieval task.",
"We then analyze the performance of grounded representations on a variety of sentence-level semantic transfer tasks, showing that grounding increases performance over text-only representations.",
"We then investigate an important open question in multi-modal semantics: to what extent are improvements in semantic performance due to grounding, rather than to having more data or data from a dierent distribution?",
"In the remainder, we analyze the role that concreteness plays in representation quality and show that our system learns grounded word embedding projections that outperform non-grounded ones.",
"To the best of our knowledge, this is the first work to comprehensively study grounding for distributed sentence representations on such a wide set of semantic benchmark tasks.",
"Sentence representations Although there appears to be a consensus with regard to the methodology for learning word representations, this is much more of an open problem for sentence representations.",
"Recent work has ranged from trying to learn to compose word embeddings (Le and Mikolov, 2014; Pham et al., 2015; Wieting et al., 2016; Arora et al., 2017), to neural architectures for predicting the previous and next sentences (Kiros et al., 2015) or learning representations via large-scale supervised tasks (Conneau et al., 2017).",
"In particular, SkipThought (Kiros et al., 2015) led to an increased interest in learning sentence representations.",
"Hill et al. (2016a) compare a wide selection of unsupervised and supervised methods, including a basic caption prediction system that is similar to ours.",
"That study finds that dier-ent learning methods are preferable for dierent intended applications, i.e., that the matter of optimal universal sentence representations is as of yet far from decided.",
"InferSent (Conneau et al., 2017) recently showed that supervised sentence representations can be of very high quality.",
"Here, we learn grounded sentence representations in a supervised setting, combine them with standard unsupervised sentence representations, and show how grounding can help for a variety of sentence-level tasks.",
"Multi-modal semantics Language grounding in semantics has been motivated by evidence that human meaning representations are grounded in perceptual experience (Jones et al., 1991; Perfetti, 1998; Andrews et al., 2009; Riordan and Jones, 2011).",
"That is, despite ample evidence of humans representing meaning with respect to an external environment and sensorimotor experience (Barsalou, 2008; Louwerse, 2008), standard semantic models rely solely on textual data.",
"This gives rise to an infinite regress in text-only semantic representations, i.e., words are defined in terms of other words, ad infinitum .",
"The field of multi-modal semantics, which aims to address this issue by enriching textual representations with information from other modalities, has mostly been concerned with word representations (Bruni et al., 2014; Baroni, 2016; Kiela, 2017, and references therein).",
"Learning multi-modal representations that ground text-only representations has been shown to improve performance on a variety of core NLP tasks.",
"This work is most closely related to that of Chrupaa et al. (2015), who also aim to ground language by relating images to captions: here, we additionally address abstract sentence meaning; have a dierent architecture, loss function and fusion strategy; and explicitly focus on grounded universal sentence representations.",
"Bridging vision and language There is a large body of work that involves jointly embedding images and text, at the word level (Frome et al., 2013; Joulin et al., 2016), the phrase level (Karpathy et al., 2014; Li et al., 2016), and the sentence level (Karpathy and Fei-Fei, 2015; Klein et al., 2015; Kiros et al., 2015; Chen and Zitnick, 2015; Reed et al., 2016).",
"Our model similarly learns to map sentence representations to be consistent with a visual semantic space, and we focus on studying how these grounded text representations transfer to NLP tasks.",
"Moreover, there has been a lot of work in recent years on the task of image caption generation (Bernardi et al., 2016; Vinyals et al., 2015; Mao et al., 2015; Fang et al., 2015).",
"Here, we do the opposite: we predict the correct image (features) from the caption, rather than the caption from the image (features).",
"Similar ideas were recently successfully applied to multi-modal machine translation 409 (Elliott and Kdr, 2017; Gella et al., 2017; Lee et al., 2017).",
"Recently, Das et al. (2017) trained dialogue agents to communicate about images, trying to predict image features as well.",
"In the following, let D = {( I k , C k )} Nk = 1 be a dataset where each image I k is associated with one or more captions C k = { C 1 , .",
".",
"., C |C| k } .",
"A prominent example of such a dataset is COCO (Lin et al., 2014), which consists of images with up to 5 corresponding captions for each image.",
"The objective of our approach is to encode a given sentence, i.e., a caption C , and learn to ground it in the corresponding image I .",
"To encode the sentence, we train a bidirectional LSTM (BiLSTM) on the caption, where the input is a sequence of projected word embeddings.",
"We combine the final left-to-right and right-to-left hidden states of the LSTM and take the element-wise maximum to obtain a sentence encoding.",
"We then examine three distinct methods for grounding the sentence encoding.",
"In the first method, we try to predict the image features (Cap2Img).",
"That is, we learn to map the caption to the same space as the image features that represent the correct image.",
"We call this strong perceptual grounding, where we take the visual input directly into account.",
"An alternative method is to exploit the fact that one image in COCO has multiple captions (Cap2Cap), and to learn to predict which other captions are valid descriptions of the same image.",
"This approach is strictly speaking not perceptually grounded, but exploits the fact that there is an implicit association between the captions and the shared underlying image, and so could be considered a weaker version of grounding.",
"Finally, we experiment with a model that optimizes both these objectives jointly: that is, we predict both images and alternative captions for the same image (Cap2Both).",
"Thus, Cap2Both incorporates both strong perceptual and weak implicit grounding.",
"Please see Figure 1 for an illustration of the various models.",
"In what follows, we discuss them in more technical detail.",
"To learn sentence representations, we employ a bidirectional LSTM architecture.",
"In particular, let x = ( x 1 , . . ., x T ) be an input sequence where each word is represented via an embedding x t R n .",
"Using a standard LSTM (Hochreiter and Schmid-huber, 1997), the hidden state at time t , denoted h t R m , is computed via h t + 1 , c t + 1 = LSTM ( x t , h t , c t | ) where c t denotes the cell state of the LSTM and where denotes its parameters.",
"To exploit contextual information in both input directions, we process input sentences using a bidirectional LSTM, that reads an input sequence in both normal and reverse order.",
"In particular, for an input sequence x of length T , we compute the hidden state at time t , h t R 2 m via h ft + 1 = LSTM ( x t , h ft , c ft | f ) h bt + 1 = LSTM ( x T t , h bt , c bt | b ) Here, the two LSTMs process x in a forward and a backward order, respectively.",
"We subsequently use max : R d R d R d to combine them into their element-wise maximum, yielding the representation of a caption after it has been processed with the BiLSTM: h T = max ( h ft , h bt ) We use GloVe vectors (Pennington et al., 2014) for our word embeddings.",
"The embeddings are kept fixed during training, which allows a trained sentence encoder to transfer to tasks (and a vocabulary) that it has not yet seen, provided GloVe embeddings are available.",
"Since GloVe representations are not tuned to represent grounded information, we learn a global transformation of GloVe space to grounded word space.",
"Specifically, let x R n be the original GloVe embeddings.",
"We then learn a linear map U R n n such that x = U x and use x as input to the BiLSTM.",
"The linear map U and the BiLSTM are trained jointly.",
"Let v RI be the latent representation of an image (e.g.the final layer of a ResNet).",
"To ground captions in the images that they describe, we map h T into the latent space of image representations such that their similarity is maximized.",
"In other words, we aim to predict the latent features of an image from its caption.",
"The mapping of caption to image space is performed via a series of projections p 0 = h T p + 1 = ( P p ) 410 Funny cat sitting on laptop max( , ) Cute kitten laying on keyboard C a p 2 I m g C a p 2 C a p Figure 1: Model architecture: predicting either an image (Cap2Img), an alternative caption (Cap2Cap), or both at the same time (Cap2Both). where denotes a non-linearity such as ReLUs or tanh. By jointly training the BiLSTM with these latent projections, we can then ground the language model in its visual counterpart. In particular, let = BiLSTM { P } L = 1 be the parameters of the BiLSTM as well as the projection layers. We then minimize the following ranking loss: L C2I ( ) = (cid:213) ( I , C )D f rank ( I , C ) + f rank ( C , I ) (1) where f rank ( a , b ) = (cid:213) b 0 N a [ sim ( a , b ) + sim ( a , b 0 )] + where [ x ] + = max ( 0 , x ) denotes the threshold function at zero and defines the margin. Furthermore, N a denotes the set of negative samples for an image or caption and sim ( , ) denotes a similarity measure between vectors. In the following, we employ the cosine similarity, i.e., sim ( a , b ) = h a , b i k a kk b k . Although this loss is not smooth at zero, it can be trained end-to-end using subgradient methods. Compared to e.g. an l 2 regression loss, Equation (1) is less susceptible to error incurred by subspaces of the visual representation that are irrelevant to the high level visual semantics. Empirically, we found it to be more robust to overfitting. 3.3 Cap2Cap Let x = ( x 1 , . . ., x T ) , y = ( y 1 , . . ., y S ) be a caption pair that describes the same image. To learn weakly grounded representations, we employ a standard sequence-to-sequence model (Sutskever et al., 2014), whose task is to predict y from x . As in the Cap2Cap model, let h T be the representation of the input sentence after it has been processed with a BiLSTM. We then model the joint probability of y given x as p ( y | x ) = S (cid:214) s = 1 p ( y s | h T , y 1 , . . ., y s 1 , ) . To model the conditional probability of y s we use the usual multiclass classification approach over the vocabulary of the corpus V such that p ( y s = k | h T , y 1 , . . ., y s 1 , ) = e h v k , y s i |V| j = 1 e h v j , y s i . Here, y s = ( WV g s + b ) and g s is hidden state of the decoder LSTM at time s . To learn the model parameters, we minimize the negative log-likelihood over all caption pairs, i.e., L C2C ( ) = (cid:213) x , y D | y | (cid:213) s = 1 log p ( y s | h T , y 1 , . . ., y s 1 , ) . 3.4 Cap2Both Finally, we also integrate both concepts of grounding into a joint model, where we optimize the following loss function: L C2B ( ) = LC 2 I ( ) + LC 2 C ( ) . 411 3.5 Grounded universal representations On their own, features from this system are likely to suer from the fact that training on COCO introduces biases: aside from the inherent dataset bias in COCO itself, the system will only have coverage for concrete concepts. COCO is also a much smaller dataset than e.g. the Toronto Books Corpus often used in purely text-based methods (Kiros et al., 2015). As such, grounded representations are potentially less universal than text-based alternatives, which also cover abstract concepts. There is evidence that meaning is dually coded in the human brain: while abstract concepts are processed in linguistic areas, concrete concepts are processed in both linguistic and visual areas (Paivio, 1990). Anderson et al. (2017) recently corroborated this hypothesis using semantic representations and fMRI studies. 
"Grounded universal representations: On their own, features from this system are likely to suffer from the fact that training on COCO introduces biases: aside from the inherent dataset bias in COCO itself, the system will only have coverage for concrete concepts.",
"COCO is also a much smaller dataset than, e.g., the Toronto Books Corpus often used in purely text-based methods (Kiros et al., 2015).",
"As such, grounded representations are potentially less universal than text-based alternatives, which also cover abstract concepts.",
"There is evidence that meaning is dually coded in the human brain: while abstract concepts are processed in linguistic areas, concrete concepts are processed in both linguistic and visual areas (Paivio, 1990).",
"Anderson et al. (2017) recently corroborated this hypothesis using semantic representations and fMRI studies.",
"In our case, we want to be able to accommodate concrete sentence meanings, for which our vision-centric system is likely to help, as well as abstract sentence meanings, where trying to imagine what 'democracy is a political system' might look like would probably only introduce noise.",
"Hence, we optionally complement our systems' representations with more abstract universal sentence representations trained on language-only data (specifically, the Toronto Books Corpus).",
"Although it would be interesting to examine multitask scenarios where these representations are jointly learned, we leave this for future work.",
"Here, instead, we combine grounded and language-only representations using simple concatenation, i.e., r gs = r grounded || r ling only .",
"Concatenation has been proven to be a strong and straightforward mid-level multi-modal fusion method, previously explored in multi-modal semantics for word representations (Bruni et al., 2014; Kiela and Bot-tou, 2014).",
"We call the combined system GroundSent (GS), and distinguish between sentences perceptually grounded in images (GroundSent-Img), weakly grounded in captions (GroundSent-Cap) or grounded in both (GroundSent-Both).",
"We use 300-dimensional GloVe (Pennington et al., 2014) embeddings, trained on WebCrawl, for the initial word representations and optimize using Adam (Kingma and Ba, 2015).",
"We use ELU (Clev-ert et al., 2016) for the non-linearity in projection layers, set dropout to 0.5 and use a dimensionality of 1024 for the LSTM.",
"The network was initialized with orthogonal matrices for the recurrent layers (Saxe et al., 2014) and He initialization (He et al., 2015) for all other layers.",
"The learning rate and margin were tuned on the validation set using grid search.",
"We use the same COCO splits as Karpathy and Fei-Fei (2015) for training (113,287 images), validation (5000 images) and testing (5000 images).",
"Image features for COCO were obtained by transferring the final layer from a ResNet-101 (He et al., 2016) trained on ImageNet (ILSVRC 2015).",
"We are specifically interested in how well (grounded) universal sentence representations transfer to dierent tasks.",
"To evaluate this, we perform experiments for a variety of tasks.",
"In all cases, we compare against layer-normalized SkipThought vectors, a well-known high-performing sentence encoding method (Ba et al., 2016).",
"To ensure that we use the exact same evaluations, with identical hyperparameters and settings, we evaluate all systems with the same evaluation pipeline, namely SentEval (Conneau and Kiela, 2018) 2 .",
"Following previous work in the field, the idea is to take universal sentence representations and to learn a simple classifier on top for each of the transfer tasksthe higher the quality of the sentence representation, the better the performance on these transfer tasks should be.",
"We evaluate on the following well-known and widely used evaluations: movie review sentiment (MR) (Pang and Lee, 2005), product reviews (CR) (Hu and Liu, 2004), subjectivity classification (SUBJ) (Pang and Lee, 2004), opinion polarity (MPQA) (Wiebe et al., 2005), paraphrase identifi-cation (MSRP) (Dolan et al., 2004) and sentiment classification (SST, binary version) (Socher et al., 2013).",
"Accuracy is measured in all cases, except for MRPC, which measures accuracy and the F1-score.",
"2 See https://github.com/facebookresearch/SentEval.",
"The aim of SentEval is to encompass a comprehensive set of benchmarks that has been loosely established in the research community as the standard for evaluating sentence representations.",
"Recent years have seen an increased interest in entailment classification as an appropriate evaluation of sentence representation quality.",
"We evaluate representations on two well-known entailment, or natural language inference, datasets: the large-scale SNLI dataset (Bowman et al., 2015) and the SICK dataset (Marelli et al., 2014).",
"We implement a simple logistic regression on top of the sentence representation.",
"In the cases of SNLI and SICK, as is the standard for these datasets, the representations for the individual sentences u and v are combined by using h u , v , u v , | u v |i as the input features.",
"We tune the seed and an l 2 penalty on the validation sets for each, and train using Adam (Kingma and Ba, 2015), with a learning rate of 0.001 and a batch size of 32.",
"Although it is not the primary aim of this work to learn a state-of-the-art image and caption retrieval system, it is important to first establish the capability of our system to do what it is trained to do.",
"Table 1 shows the results on the COCO5K caption and image retrieval tasks for the two models that predict image features.",
"We compare our system against several wellknown approaches, namely Deep Visual-Semantic Alignments (DVSA) (Karpathy and Fei-Fei, 2015), Fisher Vectors (FV) (Klein et al., 2015) and Order Embeddings (OE) (Vendrov et al., 2015).",
"As the results show, Cap2Img performs very well on this task, outperforming the compared models on caption retrieval and being very close to order embeddings on image retrieval 3 .",
"The fact that the system outperforms Order Embeddings on caption retrieval suggests that it has a better sentence encoder.",
"Cap2Both does not work as well on this task as the image-only case, probably because interference from the language signal makes the problem harder to optimize.",
"The results indicate that the system has learned to predict image features from captions, and captions from images, at a level exceeding or close to the state-of-the-art on this task.",
"Having established that we can learn high-quality grounded sentence encodings, the core question we now wish to examine is how well grounded sentence representations transfer.",
"In this section, we combine our grounded features with the 3 In fact, we found that we can achieve better performance on this task by reducing the dimensionality of the encoder.",
"A lower dimensionality in the encoder also reduces the transferability of the features, unfortunately, so we leave a more thorough investigation of this phenomenon for future work.",
"high-quality layer-normalized SkipThought representations of Ba et al. (2016), leading to multimodal sentence representations as described in Section 3.5.",
"That is, we concatenate Cap2Cap, Cap2Img or Cap2Both and Skip-Thought with Layer Normalization (ST-LN) representations, yielding GroundSent-Cap, GroundSent-Img and GroundSent-Both representations, respectively.",
"We report performance of ST-LN using SentEval, which led to slightly dierent numbers than what is reported in their paper 4 .",
"Table 2 shows the results for the semantic classification and entailment tasks.",
"Note that all systems use the exact same evaluation pipeline, which makes them directly comparable.",
"We can see that in all cases, grounding increases the performance.",
"The question of which type of grounding works best is more dicult: generally, grounding with Cap2Cap and Cap2Both appears to do slightly better on most tasks, but on e.g. SST, Cap2Img works better.",
"The entailment task results (SNLI and SICK in Table 2) show a similar picture: in all cases grounding improves performance.",
"It is important to note that, in this work, we are not necessarily concerned with replacing the state-of-the-art on these tasks: there are systems that perform better.",
"We are primarily interested in whether grounding helps relative to text-only baselines.",
"We find that it does.",
"4 This is probably due to dierent seeds, optimization methods and other minor implementational details that differ between the original work and SentEval.",
"An important open question is whether the increase in performance in multi-modal semantic models is due to qualitatively dierent information from grounding , or simply due to the fact that we have more parameters or data from a dierent distribution .",
"In order to examine this, we implement a SkipThought-like model that also uses a bidirectional LSTM with element-wise max on the final hidden layer (henceforth referred to as STb).",
"This model is architecturally identical to the sentence encoder used before: it can be thought of as Cap2Cap, but where the objective is not to predict an alternative caption, but to predict the previous and next sentence in the Toronto Books Corpus, just like SkipThought (Kiros et al., 2015).",
"We train a 1024-dimensional and 2048-dimensional STb model (for one full iteration, with all other hyperparameters identical to Cap2Cap) to compare against: if grounding improves results because it introduces qualitatively dierent information, rather than just from having more parameters (i.e., a higher embedding dimensionality), we should expect the multi-modal GroundSent models to perform better not only than STb-1024, but also than STb-2048, which has the same number of parameters (recall that GroundSent models are combinations of grounded and linguistic-only representations).",
"In addition, we compare against an ensemble of two dierent STb-1024 models (i.e., a concatenation of two separately trained STb-1024), to check that we are not (just) observing an ensemble eect.",
"As Table 3 shows, a more nuanced picture emerges in this comparison: grounding helps more for some datasets than for others.",
"Grounded models outperform the STb-1024 model (which uses much more datathe Toronto Books Corpus is much larger than COCO) in all cases, often already without concatenating the textual modality.",
"The ensemble of two STb-1024 models performs better than the individual one, and so does the higher-dimensional one.",
"In the cases of CR and MRPC (F1), it appears that improved performance is due to having more data or ensemble eects.",
"For the other datasets, grounding clearly yields better results.",
"These results indicate that grounding does indeed capture qualitatively dierent information, yielding better universal sentence representations.",
"There are a few other important questions to investigate.",
"The average abstractness or concreteness of the evaluation datasets may have a large impact on performance.",
"In addition, word embeddings from the learned projection from GloVe input embeddings, which now provides a generic word-embedding grounding method even for words that are not present in the image-caption training data, can be examined.",
"As we have seen, performance across datasets and models can vary substantially.",
"A dataset's concreteness plays an important role in the relative merit of applying grounding: a dataset consisting mostly of abstract words is less likely to benefit from grounding than one that uses mostly concrete words.",
"In order to examine this eect, we calculate the average concreteness of the evalua-Model MEN SimLex RW W353 GloVe 0.805 0.408 0.451 0.738 Cap2Both 0.819 0.467 0.487 0.712 Cap2Img 0.845 0.515 0.523 0.753 Table 5: Spearman s correlation on four standard semantic similarity evaluation benchmarks.",
"tion datasets used in this study.",
"Table 4 shows the average human-annotated concreteness ratings for all words (where available) in each dataset.",
"The ratings were obtained by Brysbaert et al. (2014) in a large-scale study, yielding scores for 40,000 English words.",
"We observe that the two entailment datasets are more concrete, which is due to the fact that the premises are derived from caption datasets (Flickr30K in the case of SNLI; Flickr8K and video captions in the case of SICK).",
"This explains why grounding can clearly be seen to help in these cases.",
"For the semantic classification tasks, the more concrete datasets are MRPC and SST.",
"The picture is less clear for the first, but in SST we see that the grounded representations definitely do work better.",
"Concreteness values make it easier to analyze performance, but are apparently not always direct indicators of improvements with grounding.",
"Our models contain a projection layer that maps the GloVe word embeddings that they receive as inputs to a dierent embedding space.",
"There has been a lot of interest in grounded word representations in recent years, so it is interesting to examine what kind of word representations our models learn.",
"We omit Cap2Cap for reasons of space (it performs similarly to Cap2Both).",
"As shown in Table 5, the grounded word projections that our network learns yield higher-quality word embeddings on four standard lexical semantic similarity benchmarks: MEN (Bruni et al., 2014), SimLex-999 (Hill et al., 2016b), Rare Words (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001).",
"We have investigated grounding for universal sentence representations.",
"We achieved good performance on caption and image retrieval tasks on the large-scale COCO dataset.",
"We subsequently showed how the sentence encodings that the sys-415 tem learns can be transferred to various NLP tasks, and that grounded universal sentence representations lead to improved performance.",
"We analyzed the source of improvements from grounding, and showed that the increased performance appears to be due to the introduction of qualitatively dierent information (i.e., grounding), rather than simply having more parameters or applying ensemble methods.",
"Lastly, we showed that our systems learned high-quality grounded word embeddings that outperform non-grounded ones on standard semantic similarity benchmarks.",
"It could well be that our methods are even more suited for more concrete tasks, such as visual question answering, visual storytelling, or image-grounded dialogue an avenue worth exploring in future work.",
"In addition, it would be interesting to explore multi-task learning for sentence representations where one of the tasks involves grounding.",
"We thank the anonymous reviewers for their helpful comments and suggestions.",
"Part of Fig. 1 is licensed from dougwoods/CC-BY-2.0/flickr.com/photos/deerwooduk/682390157."
] | [
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"objective",
"result",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"result",
"result",
"result",
"result",
"objective",
"abstain",
"other",
"other"
] |
[
"Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction.",
"However, few existing works excel in solving the overlapping triple problem where multiple relational triples in the same sentence share the same entities.",
"In this work, we introduce a fresh perspective to revisit the relational triple extraction task and propose a novel cascade binary tagging framework (CASREL ) derived from a principled problem formulation.",
"Instead of treating relations as discrete labels as in previous works, our new framework models relations as functions that map subjects to objects in a sentence, which naturally handles the overlapping problem.",
"Experiments show that the CASREL framework already outperforms state-of-the-art methods even when its encoder module uses a randomly initialized BERT encoder, showing the power of the new tagging framework.",
"It enjoys further performance boost when employing a pre-trained BERT encoder, outperforming the strongest baseline by 17.5 and 30.2 absolute gain in F1-score on two public datasets NYT and WebNLG, respectively.",
"In-depth analysis on different scenarios of overlapping triples shows that the method delivers consistent performance gain across all these scenarios.",
"The source code and data are released online 1 .",
"The key ingredient of a knowledge graph is relational facts, most of which consist of two entities connected by a semantic relation.",
"These facts are in the form of (subject, relation, object), or ( s, r, o ) , referred to as relational triples.",
"Extracting relational triples from natural language text is a crucial step towards constructing large-scale knowledge graphs.",
"Early works in relational triple extraction took a pipeline approach (Zelenko et al., 2003; Zhou et al., 2005; Chan and Roth, 2011).",
"It first recognizes all entities in a sentence and then performs relation classification for each entity pair.",
"Such an approach tends to suffer from the error propagation problem since errors in early stages cannot be corrected in later stages.",
"To tackle this problem, subsequent works proposed joint learning of entities and relations, among them are feature-based models (Yu and Lam, 2010; Li and Ji, 2014; Miwa and Sasaki, 2014; Ren et al., 2017) and, more recently, neural network-based models (Gupta et al., 2016; Katiyar and Cardie, 2017; Zheng et al., 2017; Zeng et al., 2018; Fu et al., 2019).",
"By replacing manually constructed features with learned representations, neural network-based models have achieved considerable success in the triple extraction task.",
"However, most existing approaches cannot effi-ciently handle scenarios in which a sentence contains multiple relational triples that overlap with each other.",
"Figure 1 illustrates these scenarios, where triples share one or two entities in a sentence.",
"This overlapping triple problem directly challenges conventional sequence tagging schemes that assume each token bears only one tag (Zheng et al., 2017).",
"It also brings significant difficulty to relation classification approaches where an entity pair is assumed to hold at most one relation (Miwa and Bansal, 2016).",
"Zeng et al. (2018) is among the first to consider the overlapping triple problem in relational triple extraction.",
"They introduced the categories for different overlapping patterns as shown in Figure 1 and proposed a sequence-to-sequence (Seq2Seq) model with copy mechanism to extract triples.",
"Based on the Seq2Seq model, they further investigate the impact of extraction order (Zeng et al., 2019) and gain considerable improvement with reinforcement learning.",
"Fu et al. (2019) also studied the overlapping triple problem by modeling text as relational graphs with a graph convolutional networks (GCNs) based model.",
"Despite their success, previous works on extracting overlapping triples still leave much to be desired.",
"Specifically, they all treat relations as discrete labels to be assigned to entity pairs.",
"This formulation makes relation classification a hard machine learning problem.",
"First, the class distribution is highly imbalanced.",
"Among all pairs of extracted entities, most do not form valid relations, generating too many negative examples.",
"Second, the classifier can be confused when the same entity participates in multiple valid relations (overlapping triples).",
"Without enough training examples, the classifier can hardly tell which relation the entity participates in.",
"As a result, the extracted triples are usually incomplete and inaccurate.",
"In this work, we start with a principled formulation of relational triple extraction right at the triple level.",
"This gives rise to a general algorithmic framework that handles the overlapping triple problem by design.",
"At the core of the framework is the fresh perspective that instead of treating relations as discrete labels on entity pairs, we can model relations as functions that map subjects to objects.",
"More precisely, instead of learning relation classifiers f ( s, o ) r , we learn relation-specific taggers f r ( s ) o , each of which recognizes the possible object(s) of a given subject under a specific relation; or returns no object, indicating that there is no triple with the given subject and relation.",
"Under this framework, triple extraction is a two-step process: first we identify all possible subjects in a sentence; then for each subject, we apply relation-specific taggers to simultaneously identify all possible relations and the corresponding objects.",
"We implement the above idea in CASREL , an end-to-end cascade binary tagging framework.",
"It consists of a BERT-based encoder module, a subject tagging module, and a relation-specific object tagging module.",
"Empirical experiments show that the proposed framework outperforms state-of-the-art methods by a large margin even when the BERT encoder is not pre-trained, showing the superiority of the new framework itself.",
"The framework enjoys a further large performance gain after adopting a pre-trained BERT encoder, showing the importance of rich prior knowledge in triple extraction task.",
"1. We introduce a fresh perspective to revisit the relational triple extraction task with a principled problem formulation, which implies a general algorithmic framework that addresses the overlapping triple problem by design.",
"2. We instantiate the above framework as a novel cascade binary tagging model on top of a Transformer encoder.",
"This allows the model to combine the power of the novel tagging framework with the prior knowledge in pre-trained large-scale language models.",
"3. Extensive experiments on two public datasets show that the proposed framework overwhelmingly outperforms state-of-the-art methods, achieving 17.5 and 30.2 absolute gain in F1-score on the two datasets respectively.",
"Detailed analyses show that our model gains consistent improvement in all scenarios.",
"Extracting relational triples from unstructured natural language texts is a well-studied task in information extraction (IE).",
"It is also an important step for the construction of large scale knowledge graph (KG) such as DBpedia (Auer et al., 2007), Freebase (Bollacker et al., 2008) and Knowledge Vault (Dong et al., 2014).",
"Early works (Mintz et al., 2009; Gormley et al., 2015) address the task in a pipelined manner.",
"They extract relational triples in two separate steps: 1) first run named entity recognition (NER) on the input sentence to identify all entities and 2) then run relation classification (RC) on pairs of extracted entities.",
"The pipelined methods usually suffer from the error propagation problem and neglect the relevance between the two steps.",
"To ease these issues, many joint models that aim to learn entities and relations jointly have been proposed.",
"Traditional joint models (Yu and Lam, 2010; Li and Ji, 2014; Miwa and Sasaki, 2014; Ren et al., 2017) are feature-based, which heavily rely on feature engineering and require intensive manual efforts.",
"To reduce manual work, recent studies have investigated neural network-based methods, which deliver state-of-the-art performance.",
"However, most existing neural models like (Miwa and Bansal, 2016) achieve joint learning of entities and relations only through parameter sharing but not joint decoding.",
"To obtain relational triples, they still have to pipeline the detected entity pairs to a relation classifier for identifying the relation of entities.",
"The separated decoding setting leads to a separated training objective for entity and relation, which brings a drawback that the triple-level dependencies between predicted entities and relations cannot be fully exploited.",
"Different from those works, Zheng et al. (2017) achieves joint decoding by introducing a unified tagging scheme and convert the task of relational triple extraction to an end-to-end sequence tagging problem without need of NER or RC.",
"The proposed method can directly model relational triples as a whole at the triple level since the information of entities and relations is integrated into the unified tagging scheme.",
"Though joint models (with or without joint decoding) have been well studied, most previous works ignore the problem of overlapping relational triples.",
"Zeng et al. (2018) introduced three patterns of overlapping triples and try to address the problem via a sequence-to-sequence model with copy mechanism.",
"Recently, Fu et al. (2019) also study the problem and propose a graph convolutional networks (GCNs) based method.",
"Despite their initial success, both methods still treat the relations as discrete labels of entity pairs, making it quite hard for the model to learn overlapping triples.",
"Our framework is based on a training objective that is carefully designed to directly model the relational triples as a whole like (Zheng et al., 2017), i.e., to learn both entities and relations through joint decoding.",
"Moreover, we model the relations as functions that map subjects to objects, which makes it crucially different from previous works.",
"The goal of relational triple extraction is to identify all possible (subject, relation, object) triples in a sentence, where some triples may share the same entities as subjects or objects.",
"Towards this goal, we directly model the triples and design a training objective right at the triple level.",
"This is in contrast to previous approaches like (Fu et al., 2019) where the training objective is defined separately for entities and relations without explicitly modeling their integration at the triple level.",
"Formally, given annotated sentence x j from the training set D and a set of potentially overlapping triples T j = { ( s, r, o ) } in x j , we aim to maximize the data likelihood of the training set D : | D | (cid:89) j =1 (cid:89) ( s,r,o ) T j p (( s, r, o ) | x j ) (1) = | D | (cid:89) j =1 (cid:89) s T j p ( s | x j ) (cid:89) ( r,o ) T j | s p (( r, o ) | s, x j ) (2) = | D | (cid:89) j =1 (cid:89) s T j p ( s | x j ) (cid:89) r T j | s p r ( o | s, x j ) (cid:89) r R \\ T j | s p r ( o | s, x j ) .",
"(3) Here we slightly abuse the notation T j .",
"s T j denotes a subject appearing in the triples in T j .",
"T j | s is the set of triples led by subject s in T j .",
"( r, o ) T j | s is a ( r, o ) pair in the triples led by subject s in T j .",
"R is the set of all possible relations.",
"R \\ T j | s denotes all relations except those led by s in T j .",
"o denotes a null object (explained below).",
"Eq.",
"(2) applies the chain rule of probability.",
"Eq.",
"(3) exploits the crucial fact that for a given subject s , any relation relevant to s (those in T j | s ) would lead to corresponding objects in the sentence, and all other relations would necessarily have no object in the sentence, i.e. a null object.",
"This formulation provides several benefits.",
"First, since the data likelihood starts at the triple level, optimizing this likelihood corresponds to directly optimizing the final evaluation criteria at the triple level.",
"Second, by making no assumption on how multiple triples may share entities in a sentence, it handles the overlapping triple problem by design .",
"Third, the decomposition in Eq.",
"(3) inspires a novel tagging scheme for triple extraction: we learn a subject tagger p ( s | x j ) that recognizes subject entities in a sentence; and for each relation r , we learn an object tagger p r ( o | s, x j ) that recognizes relation-specific objects for a given subject.",
"In this way we can model each relation as a function that maps subjects to objects, as opposed to classifying relations for ( subject , object ) pairs.",
"Indeed, this novel tagging scheme allows us to extract multiple triples at once: we first run the subject tagger to find all possible subjects in the sentence, and then for each subject found, apply relation-specific object taggers to find all relevant relations and the corresponding objects.",
"The key components in the above general framework, i.e., the subject tagger and relation-specific object taggers, can be instantiated in many ways.",
"In this paper, we instantiate them as binary taggers on top of a deep bidirectional Transformer BERT (Devlin et al., 2019).",
"We describe its detail below.",
"The encoder module extracts feature information x j from sentence x j , which will feed into subsequent tagging modules 2 .",
"We employ a pre-trained BERT model (Devlin et al., 2019) to encode the context information.",
"Here we briefly review BERT, a multi-layer bidirectional Transformer based language representation model.",
"It is designed to learn deep representations by jointly conditioning on both left and right context of each word, and it has recently been proven surprisingly effective in many downstream tasks (Zhong et al., 2019).",
"Specifically, it is composed of a stack of N identical Transformer blocks.",
"We denote the Transformer block as T rans ( x ) , in which x represents the input vector.",
"The detailed operations are as follows: h 0 = SW s + W p (4) h = T rans ( h 1 ) , [1 , N ] (5) where S is the matrix of one-hot vectors of sub-words 3 indices in the input sentence, W s is the sub-words embedding matrix, W p is the positional embedding matrix where p represents the position index in the input sequence, h is the hidden state vector, i.e., the context representation of input sentence at -th layer and N is the number of Transformer blocks.",
"Note that in our work the input is a single text sentence instead of sentence pair, hence the segmentation embedding as described in original BERT paper was not taken into account in Eq.",
"(4).",
"For a more comprehensive description of the Transformer structure, we refer readers to (Vaswani et al., 2017).",
"matrices.",
"3 We use WordPiece embeddings (Wu et al., 2016) to represent words in vector space as in BERT (Devlin et al., 2019).",
"Each word in the input sentence will be tokenized to fine-grained tokens, i.e. , sub-words.",
"Now we describe our instantiation of the novel cascade binary tagging scheme inspired by the previous formulation.",
"The basic idea is to extract triples in two cascade steps.",
"First, we detect subjects from the input sentence.",
"Then for each candidate subject, we check all possible relations to see if a relation can associate objects in the sentence with that subject.",
"Corresponding to the two steps, the cascade decoder consists of two modules as illustrated in Figure 2: a subject tagger; and a set of relation-specific object taggers.",
"Subject Tagger The low level tagging module is designed to recognize all possible subjects in the input sentence by directly decoding the encoded vector h N produced by the N -layer BERT encoder.",
"More precisely, it adopts two identical binary classifiers to detect the start and end position of subjects respectively by assigning each token a binary tag (0/1) that indicates whether the current token corresponds to a start or end position of a subject.",
"The detailed operations of the subject tagger on each token are as follows: p start s i = ( W start x i + b start ) (6) p end s i = ( W end x i + b end ) (7) where p start s i and p end s i represent the probability of identifying the i -th token in the input sequence as the start and end position of a subject, respectively.",
"The corresponding token will be assigned with a tag 1 if the probability exceeds a certain threshold or with a tag 0 otherwise.",
"x i is the encoded representation of the i -th token in the input sequence, i.e. , x i = h N [ i ] , where W ( ) represents the trainable weight, and b ( ) is the bias and is the sigmoid activation function.",
"The subject tagger optimizes the following likelihood function to identify the span of subject s given a sentence representation x : p ( s | x ) = (cid:89) t { start s,end s } L (cid:89) i =1 (cid:0) p ti (cid:1) I { y ti =1 } (cid:0) 1 p ti (cid:1) I { y ti =0 } .",
"where L is the length of the sentence.",
"I { z } = 1 if z is true and 0 otherwise.",
"y start s i is the binary tag of subject start position for the i -th token in x , and y end s i indicates the subject end position.",
"The parameters = { W start , b start , W end , b end } .",
"For multiple subjects detection, we adopt the nearest start-end pair match principle to decide the span of any subject based on the results of the start and end position taggers.",
"For example, as shown in Figure 2, the nearest end token to the first start token Jackie is Brown, hence the detected result of the first subject span will be Jackie R. Brown.",
"Notably, to match an end token for a given start token, we don't consider tokens whose position is prior to the position of the given token.",
"Such match strategy is able to maintain the integrity of any entity span if the start and end positions are both correctly detected due to the natural continuity of any entity span in a given sentence.",
"Relation-specific Object Taggers The high level tagging module simultaneously identifies the objects as well the involved relations with respect to the subjects obtained at lower level.",
"As Figure 2 shows, it consists of a set of relation-specific object taggers with the same structure as subject tagger in low level module for all possible relations.",
"All object taggers will identify the corresponding object(s) for each detected subject at the same time.",
"Different from subject tagger directly decoding the encoded vector h N , the relation-specific object tagger takes the subject features into account as well.",
"The detailed operations of the relation-specific object tagger on each token are as follows: p start o i = ( W rstart ( x i + v ksub ) + b rstart ) (9) p end o i = ( W rend ( x i + v ksub ) + b rend ) (10) where p start o i and p end o i represent the probability of identifying the i -th token in the input sequence as the start and end position of a object respectively, and v ksub represents the encoded representation vector of the k -th subject detected in low level module.",
"For each subject, we iteratively apply the same decoding process on it.",
"Note that the subject is usually composed of multiple tokens, to make the additions of x i and v ksub in Eq.",
"(9) and Eq.",
"(10) possible, we need to keep the dimension of two vectors consistent.",
"To do so, we take the averaged vector representation between the start and end tokens of the k -th subject as v ksub .",
"of object o given a sentence representation x and a subject s :",
"where y start o i is the binary tag of object start position for the i -th token in x , and y end o i is the tag of object end position for the i -th token.",
"For a null object o , the tags y start o i = y end o i = 0 for all i .",
"The parameters r = { W rstart , b rstart , W rend , b rend } .",
"Note that in the high level tagging module, the relation is also decided by the output of object taggers.",
"For example, the relation Work in does not hold between the detected subject Jackie R. Brown and the candidate object Washington.",
"Therefore, the object tagger for relation Work in will not identify the span of Washington, i.e., the output of both start and end position are all zeros as shown in Figure",
"2. In contrast, the relation Birth place holds between Jackie R. Brown and Washington, so the corresponding object tagger outputs the span of the candidate object Wash-ington.",
"In this setting, the high level module is capable of simultaneously identifying the relations and objects with regard to the subjects detected in low level module.",
"| D | (cid:88) j =1 (cid:88) s T j log p ( s | x j ) + (cid:88) r T j | s log p r ( o | s, x j ) + (cid:88) r R \\ T j | s log p r ( o | s, x j ) .",
"(12) where parameters = { , { r } r R } .",
"p ( s | x ) is defined in Eq.",
"(8) and p r ( o | s, x ) is defined in Eq.",
"(11).",
"We train the model by maximizing J () through Adam stochastic gradient descent (Kingma and Ba, 2014) over shuffled mini-batches.",
"Datasets and Evaluation Metrics We evaluate the framework on two public datasets NYT (Riedel",
"et al., 2010) and WebNLG (Gardent et al., 2017).",
"NYT dataset was originally produced by distant supervision method.",
"It consists of 1.18M sentences with 24 predefined relation types.",
"WebNLG dataset was originally created for Natural Language Generation (NLG) tasks and adapted by (Zeng et al., 2018) for relational triple extraction task.",
"It contains 246 predefined relation types.",
"The sentences in both datasets commonly contain multiple relational triples, thus NYT and WebNLG datasets suit very well to be the testbed for evaluating model on extracting overlapping relational triples 4 .",
"We use the datasets released by (Zeng et al., 2018), in which NYT contains 56195 sentences for training, 5000 sentences for validation, and 5000 sentences for test, and WebNLG contains 5019 sentences for training, 500 sentences for validation and 703 sentences for test.",
"According to different overlapping patterns of relational triples, we split the sentences into three categories, namely, Normal , EntityPairOverlap (EPO) and SingleEntityOverlap (SEO) for detailed experiments on different types of overlapping relational triples.",
"The statistics of the two datasets are described in Table",
"1. Following previous work (Fu et al., 2019), an extracted relational triple (subject, relation, object) is regarded as correct only if the relation and the heads of both subject and object are all correct.",
"For fair comparison, we report the standard micro Precision (Prec.), Recall (Rec.) and F1-score as in line with baselines.",
"4 Datasets such as ACE, Wiki-KBP have few overlapping triples in the sentences hence are not suitable for evaluating the performance of overlapping triple extraction.",
"Nonetheless, to validate the generality of the proposed framework, we also conduct supplemental experiments on these datasets along with the comparison of 12 recent strong baselines.",
"The results of the comprehensive comparison, which show consistent superiority of our model over most compared methods, can be found in Appendix C. Method NYT WebNLG Prec.",
"Compared Methods We compare our model with several strong state-of-the-art models, namely, NovelTagging (Zheng et al., 2017), CopyR (Zeng et al., 2018), GraphRel (Fu et al., 2019) and CopyR RL (Zeng et al., 2019).",
"The reported results for the above baselines are directly copied from the original published literature.",
"Note that we instantiate the CASREL framework on top of a pre-trained BERT model to combine the power of the proposed novel tagging scheme and the pre-learned prior knowledge for better performance.",
"To evaluate the impact of introducing the Transformer-based BERT model, we conduct a set of ablation tests.",
"CASREL random is the framework where all parameters of BERT are randomly initialized; CASRELLSTM is the framework instantiated on a LSTM-based structure as in (Zheng et al., 2017) with pre-trained Glove embedding (Penning-ton et al., 2014); CASREL is the full-fledged framework using pre-trained BERT weights.",
"Main Results Table 2 shows the results of different baselines for relational triple extraction on two datasets.",
"The CASREL model overwhelmingly outperforms all the baselines in terms of all three evaluation metrics and achieves encouraging 17.5% and 30.2% improvements in F1-score over the best state-of-the-art method (Zeng et al., 2019) on NYT and WebNLG datasets respectively.",
"Even without taking advantage of the pre-trained BERT, CASREL random and CASRELLSTM are still competitive to existing state-of-the-art models.",
"This validates the utility of the proposed cascade decoder that adopts a novel binary tagging scheme.",
"The performance improvements from CASREL random to CASREL highlight the importance of the prior knowledge in a pre-trained language model.",
"We can also observe from the table that there is a significant gap between the performance on NYT and WebNLG datasets for existing models, and we believe this gap is due to their drawbacks in dealing with overlapping triples.",
"More precisely, as presented in Table 1, we can find that NYT dataset is mainly comprised of Normal class sentences while the majority of sentences in WebNLG dataset belong to EPO and SEO classes.",
"Such inconsistent data distribution of two datasets leads to a comparatively better performance on NYT and a worse performance on WebNLG for all the baselines, exposing their drawbacks in extracting overlapping relational triples.",
"In contrast, the CASREL model and its variants (i.e., CASREL random and CASRELLSTM ) all achieve a stable and competitive performance on both NYT and WebNLG datasets, demonstrating the effectiveness of the proposed framework in solving the overlapping problem.",
"Detailed Results on Different Types of Sentences To further study the capability of the proposed CASREL framework in extracting overlapping relational triples, we conduct two extended experiments on different types of sentences and compare the performance with previous works.",
"The detailed results on three different overlapping patterns are presented in Figure",
"3. It can be seen that the performance of most baselines on Normal , EPO and SEO presents a decreasing trend, reflecting the increasing difficulty of extracting relational triples from sentences with different overlapping patterns.",
"That is, among the three overlapping patterns, Normal class is the easiest pattern while EPO and SEO classes are the relatively harder ones for baseline models to extract.",
"In contrast, the proposed CASREL model attains consistently strong performance over all three overlapping patterns, es-66.3 66 67.469.671.2 87.3 67 59.262.765.865.4 89.4 NYT WebNLG 20 40 60 80 100",
"pecially for those hard patterns.",
"We also validate the CASREL 's capability in extracting relational triples from sentences with different number of triples.",
"We split the sentences into five classes and Table 3 shows the results.",
"Again, the CASREL model achieves excellent performance over all five classes.",
"Though it's not surprising to find that the performance of most baselines decreases with the increasing number of relational triples that a sentence contains, some patterns still can be observed from the performance changes of different models.",
"Compared to previous works that devote to solving the overlapping problem in relational triple extraction, our model suffers the least from the increasing complexity of the input sentence.",
"Though the CASREL model gain considerable improvements on all five classes compared to the best state-of-the-art method CopyR RL (Zeng et al., 2019), the greatest improvement of F1-score on the two datasets both come from the most difficult class (N 5), indicating that our model is more suitable for complicated scenarios than the baselines.",
"Both of these experiments validate the superiority of the proposed cascade binary tagging framework in extracting multiple (possibly overlapping) relational triples from complicated sentences compared to existing methods.",
"Previous works have to explicitly predict all possible relation types contained in a given sentence, which is quite a challenging task, and thus many relations are missing in their extracted results.",
"In contrast, our CASREL model side-steps the prediction of relation types and tends to extract as many relational triples as possible from a given sentence.",
"We attribute this to the relation-specific object tagger setting in high level tagging module of the cascade decoder that considers all the relation types simultaneously.",
"In this paper, we introduce a novel cascade binary tagging framework (CASREL ) derived from a principled problem formulation for relational triple extraction.",
"Instead of modeling relations as discrete labels of entity pairs, we model the relations as functions that map subjects to objects, which provides a fresh perspective to revisit the relational triple extraction task.",
"As a consequent, our model can simultaneously extract multiple relational triples from sentences, without suffering from the overlapping problem.",
"We conduct extensive experiments on two widely used datasets to validate the effectiveness of the proposed CASREL framework.",
"Experimental results show that our model overwhelmingly outperforms state-of-the-art baselines over different scenarios, especially on the extraction of overlapping relational triples.",
"The authors would like to thank the anonymous referees for their valuable comments.",
"This work is supported by the National Natural Science Foundation of China (No.61976102, No.U19A2065)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"result",
"other",
"other"
] |
[
"News events are often associated with quantities (e.g., the number of COVID-19 patients or the number of arrests in a protest), and it is often important to extract their type, time, and location from unstructured text in order to analyze these quantity events.",
"This paper thus formulates the NLP problem of spatiotemporal quantity extraction, and proposes the first meta-framework for solving it.",
"This meta-framework contains a formalism that decomposes the problem into several information extraction tasks, a shareable crowdsourcing pipeline, and transformer-based baseline models.",
"We demonstrate the meta-framework in three domainsthe COVID-19 pandemic, Black Lives Matter protests, and 2020 California wildfiresto show that the formalism is general and extensible, the crowdsourcing pipeline facilitates fast and high-quality data annotation, and the baseline system can handle spatiotemporal quantity extraction well enough to be practically useful.",
"We release all resources for future research on this topic.",
"1 1 Introduction Events are often associated with quantities how many COVID-19 patients are on ventilators, how many people are injured during protests, or how large is the extent of a wildfire.",
"We often need to figure out the type of an event, and where and when it happened for these quantities for coherent discussion of public policy on sociopolitical events in rapidly evolving situations: 19 deaths is different from 19 recoveries; 19 deaths in a small city yesterday apparently describes a more severe situation than 19 deaths in the whole country last month.",
"However, until dedicated channels are established, these quantities are typically first reported on social media and local news articles, which then have to slowly make their way to some Work started while at the Allen Institute for AI 1 https://github.com/steqe DCT: Thursday, 08/27/2020 Title: Study Sessions, Dinners: 104 New USC Student Coronavirus Cases Text: LOS ANGELES , CA -The number of coronavirus cases confirmed among USC students continued rising Thursday, with the university announcing [104] new cases over the past four days Recognition: 104 Type: Confirmed cases Spatial Grounding: US California Los Angeles USC Temporal Grounding: [08/23/2020, 08/26/2020] DCT: Monday, 06/01/2020 Title: Black Lives Matter: 16 Organizations That Are Bailing Out Protestors Text: Police officers have arrested[thousands] of demonstrators Recognition: thousands Type: Arrests Spatial Grounding: US Temporal Grounding: Overall quantity ending on 06/01/2020 Figure 1: Given document creation time (DCT), title, and text, the STEQE problem is to do quantity recognition, typing, spatial grounding, and temporal grounding according to the proposed formalism (Sec. 2).",
"aggregate location for decision-makers to use.",
"This calls for a general framework to extract and analyze quantities associated with events, so that we can automatically summarize quantitative information from news streams, rapidly respond to emergencies, investigate incidents, and potentially combat misinformation through comparisons with trusted sources.",
"Prior work on events focused on extracting event mentions, attributes, and relationships (ACE, 2005; Chen and Ji, 2009; Do et al., 2011; UzZaman et al., 2013; Glava et al., 2014; Zhou et al., 2019; Chen et al., 2021), and paid little attention to quantities associated with those events, which presents an opportunity to perform targeted information extraction on these quantity events .",
"This paper studies spatiotemporal quantity extraction (STEQE): finding quantities of certain 2736 types and extracting their associated times and locations.",
"We develop a general meta-framework to help researchers overcome challenges and extend to new domains easily.",
"Specifically, the contributions of this meta-framework are: Task Formulation We draw on ideas from existing NLP tasks to create the first formalism that defines STEQE as four information extraction tasks: quantity recognition, typing, spatial grounding, and temporal grounding.",
"While each of these has analogues in the literature, our combination of them into a complete picture of quantity events is novel.",
"Annotation Collection We release a shareable and extensible crowdsourcing pipeline on CROWDAQ (Ning et al., 2020a) that facilitates fast and reliable data annotation.",
"We show how this pipeline facilitates fast and high-quality annotations for three sociopolitical events: the COVID-19 pandemic, Black Lives Matter (BLM) protests, and 2020 California wildfires.",
"These practical STEQE datasets are also released to foster future research.",
"Modeling We propose a T5 baseline model for its flexibility across tasks and easy domain transfer.",
"This model shows that, while the end-to-end STEQE problem remains challenging in all domains, temporal grounding is typically the most difficult task, pointing out a research focus next.",
"The STEQE problem aims to extract information about quantity events in text, consisting of four parts: determining which numerical expressions actually correspond to events (2.1), the type of the event that a quantity is referring to (2.2), where that event happened (2.3), and the temporal extent to which the quantity refers (2.4).",
"Note that for each of these subparts, there could have been other definition and formulation choices.",
"We describe our formalism's design choices, and discuss why they would lead to better-defined learning problems and more reliable data collection, along with their limitations and how to extend our formalism for more specialised applications.",
"Similar to named entity recognition (NER) (Tjong Kim Sang and De Meulder, 2003), quantity recognition is defined as a text span detection problem.",
"We discuss two questions regarding the definition of quantities : (1) how to distinguish between quantities and non-quantities; (2) how to define the span for quantities to avoid misalignment.",
"First, quantities are a special type of numbers that are associated with events , either in digits (e.g., 123 ) or in words (e.g., one hundred twenty three ).",
"Some non-quantity examples are:",
"1. Date and time: May 8, 2020 and 5:30 pm",
"2. Duration: 3 months and 60 years old",
"3. Part of an entity name: COVID-19 , Porsche 911 , and 502 Main Street Article words , a and an , require more attention.",
"When we say a man died, the a does mean 1 death, while in a large number of people died, the a itself does not have the meaning of 1, and we thus do not consider it a quantity.",
"Ordinal numbers can also indicate events, but their spatiotemporal extent can be understood differently: the fifth case in Seattle implies that there had been 5 cases, and the spatiotemporal extent of fifth can be that of the fifth case only, or all of the five cases.",
"Ordinal-number events are rare in our study, so comparing to the extra annotation requirement, we decide to consider ordinal numbers as non-quantities, although the definition is easily extensible to cover them in the future.",
"Second, we need to define the boundaries of these quantity spans.",
"For instance, in five cases in Seattle, should one label the text span of five or five cases ?",
"What about 4.8 billion and $4.8 billion ?",
"Similar to labeling an event using its predicate only, our choice is to keep the span minimal while keeping the numerical semantics: we will mark five (i.e., drop case ), 4.8 billion (i.e., keep billion ), and 4.8 billion (i.e., drop $ ) in these examples.",
"Minimising the span does not lose information about the quantityonly marking five in five cases does not prevent us from identifying its type, unit, and spatiotemporal extent in subsequent annotation tasks.",
"Below are some tricky cases, and quantities are in brackets.",
"1. Rate: [20 percent] of the tenants were infected , the positive rate is now [200] per [100,000] , [1000] tests per day",
"2. Approximation: [4 or 5] are missing",
"3. Range: the positive rate is [2 to 3 percent] / at least [2%] / at most [3%] 2.2 Quantity Typing Again, similar to NER, recognized quantities can have an associated type from a predefined set of 2737 classes.",
"2 A clear event type is important for subsequent spatiotemporal grounding, but some quantities can have multiple types, and some can have multiple interpretations for their spatiotemporal extent.",
"This work thus makes two design choices to mitigate these issues.",
"Enforce single-typing In this work, we allow quantities to have only one single type.",
"This ensures annotation quality since multiple types for a single quantity may complicate the spatiotemporal extent.",
"For instance, in [three] men were hospitalized 5 days after being tested positive, the time span of hospitalization and that of tested positive are different.",
"We enforce single-typing by providing an order of importance.",
"For instance, hospitalization is more important than tested positive, so the spatiotemporal extent of three will be that of hospitalizations.",
"Ignore rate and money quantities Rate and money quantities are excluded in all of our typing labels, because their spatiotemporal extent can be interpreted in different ways.",
"For instance, the spatiotemporal extent of a bill of $4.8 billion can be interpreted either as when and where this bill was passed, or as when and where the bill will be used; similarly, to define the time span of the rate quantity [20%] of the tenants were infected , we can either use the time span from the very first case to the last case that brought the infection rate from 0% to 20%, or use the time span when the infection rate was holding at 20%.",
"For applications where one needs to spatiotemporally ground rate and money quantities, one could extend our instructions to clarify the ambiguities above.",
"The spatial grounding problem of STEQE is to ground real-world events to a locale (see Fig. 7 in Appendix), avoiding complications in applications like human-robot interactions (e.g., turn left and go to the kitchen, and then pick up the fruit on the table ).",
"Thus we do not need to handle the nuances of relative spatial relationships like the kitchen is on our left and the table is in the kitchen.",
"We describe our formalism in terms of the format, granularity, and multi-location handling.",
"Title : Six COVID-19 cases emerge in South Portland Text : SOUTH PORTLAND, Maine -A facility for people with cognitive disabilities reports having [six] COVID-19 cases Spatial grounding for [six]: US Maine South Portland A facility for people with cognitive disabilities Figure 2: The desired spatial grounding annotation is the most specific location mentioned in the text that contains all individual cases of a quantity event.",
"Format An important decision for spatial grounding is the format : we can use natural language to describe the locale, select text spans from the original text, or select from a map directory.",
"In this work, we use a combination of all three for spatial grounding to balance between flexibility and consistency: we choose from a predefined set of questions to determine the country (U.S. vs non-U.S.) and state, use free text for the name of the city, and span selection for more granular locale information (e.g., a pork plant ).",
"We leave it for future work if one wants to extend to other countries, or if one can provide a detailed map directory.",
"Granularity We define spatial grounding annotation to be the most specific location mentioned in the text that contains all individual cases of a quantity event.",
"For instance, in Fig. 2, the title mentions 6 cases in South Portland, but later we will see that the 6 cases are all from a facility for people with cognitive disabilities.",
"The annotation should specify that facility instead of stopping at South Portland.",
"This design choice requires annotators to check the context in addition to the sentence containing the quantity, and is important for downstream tasks because it is likely that there are cases in South Portland but not in that facility.",
"Multi-location We handle events in multiple locations by broadening the granularity of the spatial location, as mentioned above.",
"However, there are cases where the same quantity is explicitly mentioned with two or more separate locations:",
"The 10 in both sentences above are associated with two cities, Seattle and Tacoma.",
"The semantics are also different: being shared by two locales, or the events from both locales combine to make this quantity.",
"In our pilot studies, we tried to consider 2738 these details in multi-location quantities, but found that they were very rare and crowd workers could not capture them reliably.",
"We thus decide to ignore these cases in this work and only allow crowd workers to select a single location.",
"The temporal grounding problem of STEQE is to ground each real-world quantity event to a single time span, which reduces the complexities in temporal semantics often encountered in prior datasets (Pustejovsky et al., 2003; Cassidy et al., 2014; O'Gorman et al., 2016; Ning et al., 2018a, 2020b) and improves practicality.",
"Format A time span consists of two time points, and the key is the format for time points.",
"In this work, we allow a time point to be UNKNOWN if the text is unclear.",
"For a specific time point, there are two general ways to describe it: (1) use absolute date and time (e.g., Feb 1st, 2021 ); (2) use relative time based on a reference time point T (e.g., 3 days before lockdown ).",
"We have chosen the first format in this study, and when a time point is unclear based on the text, we allow annotators to simply select Unknown .",
"The second method above is strictly more expressive, but also comes with many degrees of freedom: the reference point T can be either an absolute date and time T time or another event T event (e.g., lockdown ), and the relative time difference can be either a specific duration spec like 3 days before/after or a rough description rough like a few days before/after.",
"In our pilot studies allowing for T time + rough , T event + spec , or T event + rough , we found the T + method too flexible to achieve annotation agreement; in the meantime, using absolute date and time could reliably estimate those time spans in practice.",
"This is why we recommend the first format above.",
"Granularity Given the nature of news events, it is often enough to be specific up to days .",
"We define the time span of a quantity to be from the day of first event to the day of the last, 3 but this exact time span may not always exist in the text, so STEQE uses the best over-estimate of this gold time span based on information in the text (see Table 3).",
"3 If these events are durative, then accordingly, the time span should change to the day when the first event started to the day when the last event ended , although we did not find it necessary to point this out in our data collection guidelines for crowd workers.",
"This work also addresses common ambiguities.",
"(1) Some time expressions are not critical and thus less specific in text, e.g., March 2020, for which we will simply use the entire span of that range, e.g., [03/01/2020, 03/31/2020].",
"(2) For time expressions like mid September and end of 2020 , we choose the closest dates, e.g., 09/15 and 12/31/2020 .",
"(3) Depending on the actual publication date and the content of an article, there can be different interpretations for today, thus leading to a one-day disagreement among people regarding time expressions like yesterday or in the last three days.",
"We allow our annotators to use their best judgment in these cases.",
"Multi-span Similar to spatial grounding, we handle events in multiple time spans by broadening the granularity of the time span, as mentioned above, and as with spatial grounding, we do not label multiple time spans separately in rare cases like 10 arrests on Monday and Wednesday.",
"Overall quantity A special type of temporal grounding phenomenon is overall quantities .",
"Strictly speaking, this notion exists for spatial grounding as well (e.g., the overall COVID-19 case number around the world or the U.S.).",
"While humans easily agree on the spatial extent of these overall quantities, their time spans are often ambiguous, especially the start time.",
"For instance, in there have been [3 million] cases so far, the start time is supposed to be the beginning of the pan-demic, but people do not always agree on when that was.",
"The disagreement comes from (1) the pandemic started at different times in different regions of the world; (2) one may argue that the pandemic started either since the first confirmed case, or since the lockdown.",
"This debate over start-time is not an NLP problem, so instead of inventing a new mechanism to resolve this, we simply allow overall as a label for the start time of a quantity.",
"We have walked through the definition of the tasks in our STEQE framework, with discussions on various design choices.",
"Next we explain how to collect annotations via this framework in practice.",
"Table 1 shows some example annotations from our datasets.",
"We worked with NewsBreak Inc., a local news aggregation company, to obtain raw newswire texts from publicly available news outlets.",
"4 We then made use of NewsBreak's internal tools to determine the topic of these news articles, i.e., whether an article is about COVID-19, Black Lives Matter protests in 2020, or the 2020 California wildfires.",
"The data also comes with meta information including each article's source domain and publication time.",
"Altogether, we obtain 1M articles on COVID-19 between 01/01/2020 and 12/31/2020, 100k on protests from 05/22/2020 to 12/31/2020, and 90k on California fires from 08/01/2020 to 12/31/2020 as source articles.",
"Following the general guidelines in 2.2, we used the following domain-specific types in this study.",
"1. COVID-19 pandemic: deaths caused by COVID-19, deaths likely caused by COVID-19, recoveries, confirmed cases, tests, tested negative, hospitalizations, patients on ventilators, and in ICUs.",
"2. BLM protests: protests, participants, order maintainers, arrests, deaths, injuries, and shootings.",
"3. California fires: fires, physical measurements, people impacted, items impacted, and resources.",
"These domain-specific types can be very specific (see those for the COVID-19 pandemic) or generic (see those for California fires), which demonstrates the flexibility of our framework.",
"CROWDAQ (Ning et al., 2020a) is an open-source platform that standardizes data annotation pipelines and provides a customizable annotation interface, automated annotator qualification exams, progress monitoring, and annotation agreement monitoring.",
"5 CROWDAQ pipelines have four components: instruction, tutorial, exam, and main task: an annotator will read the instruction and tutorial, and then work on a set of multiple-choice exam questions.",
"CROWDAQ automatically checks their scores and assigns qualifications.",
"Qualified annotators will then be able to work on the main task.",
"For each of the four tasks defined in Sec. 2, we have designed CROWDAQ pipelines that are general enough to be used for annotating in all domains.",
"6 We release the CROWDAQ pipelines for public use.",
"7 3.4 Data statistics We first show statistics of our qualification exams in Table",
"2. We can see quantity recognition expectedly has the fewest hard questions and highest passing rate, and spatial and temporal grounding have more hard questions.",
"Note that typing for California fires seems harder than typing for the other two domains, likely due to our choice of more generic types for California wildfires.",
"We then launched main annotation tasks on Amazon Mechanical Turk (MTurk) that were available 5 http://www.crowdaq.com/ 6 The only change for a new domain is instructions and exams for quantity typing, which have to be domain-specific.",
"only to qualified workers.",
"We also required 3 different workers for each single annotation job and used majority voting to aggregate multiple workers' annotations.",
"Since quantity recognition is a relatively easy task and our quantity recognition system based on BERT (Devlin et al., 2019) for the COVID domain was reliable enough to be applied to other domains, we did not further collect quantity recognition data.",
"Table 3 and Table 6 (Appendix) show more statistics of these datasets.",
"Note that we did not enforce full annotation for all quantities (i.e., one quantity may only receive typing annotations, and another may only receive spatial annotations) to cover more documents (Ning et al., 2019a).",
"Within those reported in Table 3, 500 quantities in each domain are fully labeled with both typing and spatiotemporal extent, and we use these as our test sets.",
"We paid $0.05 for each job in quantity recognition, and $0.15 for those in typing, spatial grounding, and temporal grounding; in the COVID-19 data collection, the average hourly pay of the top 5 annotation contributors was $25 (typing), $13 (spatial grounding), and $12 (temporal grounding).",
"In total, the cost of 3 datasets was $11k (including 20% overhead paid to MTurk).",
"We developed our CROWDAQ pipeline for COVID-19 and applied it on other domains.",
"When we received news articles in BLM protests and California wildfires from NewsBreak Inc., it only took us about 2 weeks to obtain the annotations used in this work, including designing domain-specific typing instructions and exams, launching tasks to MTurk, and waiting for crowd workers to finish.",
"This fast and reliable data collection is appealing for responding to emerging events in the future.",
"Quantity recognition is a typical span selection problem and we use the standard token classification model based on BERT (large, cased) (Devlin et al., 2019) that comes with HuggingFace (Wolf et al., 2020).",
"For typing, spatial, and temporal grounding, we use the T5-large language model (Raffel et al., 2020) for its flexibility across tasks and easy domain transfer.",
"We format data from each task to fit into T5's sequence to sequence (seq-to-seq) nature.",
"Specifically, for each quantity, the input sequence to T5 is the string of the previous 3 sentences, the current sentence with a special marker token right before the quantity span, the next 3 sentences, the title, and document creation time (DCT).",
"For typing, the output sequence is a single token representing each label mapped from a reserved vocabulary.",
"For spatial grounding, the output sequence is the location names from the highest hierarchy to the lowest ended by an end-of-sentence (EOS) marker.",
"For temporal grounding, the output sequence is the start time followed by the end time.",
"Both times are either unknown or a date string in ISO 8601 format (e.g., 2021-01-15 ).",
"We view the start time of an overall quantity as unknown .",
"To get complete date predictions, we enforce the decoding length to be at least 12 and use a date parser to find unknowns or dates.",
"In our evaluation of quantity recognition using the aforementioned BERT model on a random set of 300 sentences (100 from each domain), we find the precision 99% for all domains, and the recall 95% (COVID), 87% (BLM), and 87% (Fire).",
"The recall is slightly lower because of poor performance on article words ( a and an ).",
"However, since most missed quantities are not associated with event types that we are interested in (e.g., [a] post of-fice or [a] comment ), the adjusted recall is 98% (COVID), 94% (BLM), and 93% (Fire) if we do not consider those irrelevant quantities.",
"Table 4 shows system performances on typing , spatial , and temporal grounding on extracted quantities.",
"Our test set in each domain consists of 500 fully annotated quantities.",
"The rest of the data is split into 80% for training and 20% for development, that we use to acquire the learning rate (5e-3) and batch size (32).",
"We compare T5 with a naive method, which always predicts the majority type in each domain for typing, the location mention closest to the quantity in text for spatial grounding, 8 and overall quantity ending on DCT for temporal grounding.",
"For spatial grounding , we report two exact match (EM) scores, up to the state-level and city-level, respectively.",
"For temporal grounding , we report the accuracy for judging whether a quantity is an overall quantity ending on DCT (Binary in Table 4), and two EM scores for cases where the gold start time is a specific date 8 This assumes world knowledge of geo-hierarchies, e.g., L.A. is in California.",
"(S-N for Start-Nontrivial) and where the end time is not DCT (E-N for End-Nontrivial).",
"T5 (in-domain) On quantity typing, T5 improves by a large margin over the naive baseline in all domains.",
"The naive baseline performs reasonably well on spatial grounding at the state level (82-92% EM-state across three domains), but often fails to provide more granular information at the city level (58-74% EM-city).",
"This is expected because a city mentioned close to the quantity does not necessarily mean that the quantity is for the city.",
"9 This phenomenon also varies across domains: BLM protests were in a few major cities, the EM-city score of the naive method is thus relatively high (74%), while for Calfornia wildfires, there were more cities to choose from, leading to a low EM-city of 58%.",
"In contrast, T5 can produce more granular information at the city level, and maintain a relatively stable score across domains (70-81% EM-city).",
"As for temporal grounding, due to the nature of news articles, the naive baseline that treats all quantities as an overall quantity ending on DCT yields reasonably good performances in all domains; but for quantities with a non-trivial start time or end time, the naive baseline largely fails.",
"T5 (all domains) We also combine the training data for spatiotemporal grounding from all domains and train a single T5 system (but keep T5 in-domain systems for typing), which achieves the best scores for almost all metrics in Table 4.",
"One outlier is the Fire domain, where the Binary score 9 The State Department of Public Health in Springfield reports a total case of [268]. is a quantity for the state.",
"for temporal grounding drop, probably due to most temporal annotations being overall quantities.",
"This suggests that spatiotemporal phenomena can be generally transferred across different domains.",
"Finally, the end-to-end column in Table 4 shows how many of these quantities have received correct predictions on typing, spatial grounding (based EM-city), and temporal grounding (based on Bi-nary).",
"The reported performance does not count for quantities that are not recognized, so we view this as the precision of the system.",
"We see that the naive baseline has very low performance due to errors propagated at each step, while with this framework, T5 is trained to produce significantly better results.",
"Note that depending on the use case, one can simply collect more training data, or focus on only a few important event types, to further improve the end-to-end performance.",
"Existing NLP works on events have focused on detection (e.g., detecting LIFE and BUSINESS events; ACE (2005)), common sense (e.g., Rashkin et al. (2018); Sap et al. (2019); Zhang et al. (2020a)), and relationships (e.g., coreferential Chen and Ji (2009), temporal UzZaman et al. (2013), causal Do et al. (2011), and parent-child relations Glava et al. (2014)).",
"There is also a line of recent works specifically on temporal semantics: time expression extraction and normalization (Laparra et al., 2018), temporal relation extraction (Ning et al., 2018a, 2019b, 2020b), temporal common sense (Zhou et al., 2019, 2020), temoral slot filling (Sur-deanu, 2013), and timeline construction (Do et al., 2012; Ning et al., 2018b; Li et al., 2019).",
"These tasks may help understanding the temporal aspects of events in general, but they cannot directly associate temporal values with quantities, and calls for a dedicated framework such as STEQE.",
"Prior works on quantities either focus on math calculations (Roy et al., 2015; Roy and Roth, 2018) or common sense reasoning (e.g., mass distribution of animals; Elazar et al. (2019)), and not on quantity events and the associated spatiotemporal extent studied in this work.",
"Existing works on spatial semantics have focused on natural language navigation (Chen et al., 2019; Kim et al., 2020), human-machine interaction (Landsiedel et al., 2017; Roman Roman et al., 2020), dialogue systems (Udagawa et al., 2020), and clinical analysis (Kordjamshidi et al., 2015; Datta and Roberts, 2020).",
"Works on geocoding (Gritta et al., 2018; Kulkarni et al., 2020) map spatial mentions to coordinates, which can be applied to our work for finer geolocation mapping.",
"Zhang and Choi (2021) proposes a QA dataset that considers time and location of the question when judging answer correctness, which may benefit from our information extraction framework.",
"A recent work from Zong et al. (2020), which extracts COVID-19 related events from tweets, is closely related to our work.",
"Besides that they worked on tweets instead of news articles, the key differences are: (1) instead of span selection used in Zong et al. (2020), we propose formalisms deeper into the spatiotemporal extent of quantity events and capture more nuances in spatiotemporal semantics; (2) we show that our STEQE framework generally applies to multiple domains and not only for the COVID-19 pandemic; (3) we release our entire data collection pipeline on CROWDAQ for public use and extension.",
"As 5 shows, the performance bottleneck of STEQE is mainly at temporal grounding: with almost perfect quantity recognition and very good typing and spatial grounding results, temporal grounding performance is typically much lower than the other tasks.",
"While typing and spatial grounding are ready for practical research into few-and zero-shot settings along the lines of what is done in entity typing (Zhou et al., 2018; Obei-dat et al., 2019; Zhang et al., 2020b), temporal grounding still requires more investigation even in in-domain settings.",
"Why is temporal grounding so challenging?",
"First, news articles tend to mention many overall quantities ending on publication time, leading to imbalanced datasets.",
"For instance, 86% in Fire fall into this category, leaving little training data for other quantities; in contrast, this number is only 32% in BLM, and the S-N and E-N scores are much higher in BLM than those in Fire.",
"Second, temporal grounding often requires reasoning, an effect known to be difficult in many works on temporal semantics (Ning et al., 2020b; Zhou et al., 2021).",
"For instance in Fig. 4, to figure out the time span of 80, we need to understand that (1) it happened on Sunday (2) the Sunday is a Sunday in the past instead of in the future, and (3) it is most likely the most recent Sunday instead of earlier ones.",
"Another direction to improve on STEQE is to aggregate from multiple articles, given that the same quantity or similar quantities are typically covered by multiple sources.",
"Cross-document event coreference has many unique difficulties (e.g., see Upad-hyay et al. (2016); Bugert et al. (2020)), but knowing the quantity event type, location, and time span may make it relatively easy to find coreference to strengthen one's belief in its prediction, or demote outliers that are likely wrong predictions.",
"The proposed STEQE framework may also be used to detect misinformation and perhaps in social science studies too.",
"For instance, we have anecdotes where a website mistakenly reported Vir-ginia's COVID-19 case number on Apr 2, 2020 to be 17k, while the correct number was 1.7k; we also found signs that news agencies might have mentioned case numbers in New York city less frequently after a sharp increase, but turned to report case numbers in New Jersey in April 2020.",
"These social science analyses are beyond the scope of this work, but the examples above point to interesting potential uses of these information extraction systems.",
"Many important news events are associated with quantities.",
"With practicality in mind, we dive deep into the semantics of quantity events and propose a meta-framework for spatiotemporal quantity extraction: we formulate the problem as four information extraction tasks which lead to quick and reliable data annotation via crowdsourcing; we also build a T5 baseline to study the difficulties of the task and discuss transfer learning opportunities.",
"We use this meta-framework to build datasets on three separate sociopolitical events: the COVID-19 pandemic, BLM protests, and California fires.",
"Our meta-framework is shown to be readily extensible to different domains of quantity events, an appealing feature for quick response to future events.",
"The new datasets we collect as examples of this framework can also directly contribute to future studies on spatiotemporal quantity extraction."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"other",
"result",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective"
] |
[
"In Community-based Question Answering system(CQA), Answer Selection(AS) is a critical task, which focuses on finding a suitable answer within a list of candidate answers.",
"For neural network models, the key issue is how to model the representations of QA text pairs and calculate the interactions between them.",
"We propose a Sequential Attention with Keyword Mask model(SAKM) for CQA to imitate human reading behavior.",
"Question and answer text regard each other as context within keyword-mask attention when encoding the representations, and repeat multiple times(hops) in a sequential style.",
"So the QA pairs capture features and information from both question text and answer text, interacting and improving vector representations iteratively through hops.",
"The flexibility of the model allows to extract meaningful keywords from the sentences and enhance diverse mutual information.",
"We perform on answer selection tasks and multi-level answer ranking tasks.",
"Experiment results demonstrate the superiority of our proposed model on community-based QA datasets.",
"Answering selection(AS) is one of the most fundamental challenges in community-based question answering(CQA) services.",
"Given a question and a list of candidate answers, its aim is to choose the most matching one to the question.",
"During this process of matching questions and answer candidates, how to encode the question and answer(QA) into meaningful and semantic representations impacts on the results directly.",
"Earlier conventional statistic methods are normally based on feature engineering and resource toolkits.",
"Though these methods are easy in implementation, they require extra efforts and handcrafted features(Heilman and Smith, 2010; Tymoshenko and Moschitti, 2015).",
"Recently, with the development of neural network, deep learning based models attract much attention in various tasks(Krizhevsky et al., 2012; Sutskever et al., 2014).",
"In question answering field, the convolutional neural networks(CNNs)(Yu et al., 2014; Hu et al., 2014; Severyn and Moschitti, 2015) and recurrent neural networks(RNNs)(Wang and Nyberg, 2015; Feng et al., 2015) are widely employed to convert the question and answer text into vectors and define a feed-forward multi-layer percep-tron to compute the interactions between them.",
"These models construct sentences in an end-to-end fashion with less manual involvement.",
"To capture fine-grained features, on the one hand, some works are concerned with matching QA pairs relationship in a more complex and diverse way, e.g., CNTN(Qiu and Huang, 2015) and MV-LSTM(Wan et al., 2016).",
"On the other hand, latent representation models aim to jointly learn lexical and semantic information from QA sentences and influence the vector generation directly, e.g., attention mechanism(Bahdanau et al., 2015).",
"Attention mechanism learns attention weights of each words pairs between QA sentences.",
"Afterwards it can calculate the weighted sum of hidden states over all time steps(dos Santos et al., 2016).",
"This approach has shown promising results, while challenges still exist.",
"For example, questions and answers in CQA services are generally long sentences, as such it is still difficult to compress all information into a fixed-length vector.",
"To solve this problem, Sha et al. (2018) proposes co-attention view which brings improvement, and Zhang et al. (2018) further proposes a two-step attention to build dynamic question vectors based on various answer words.",
"These kinds of methods usually require more parameters to learn representations.",
"More importantly, when computing attention weights, every words in QA pairs are involved.",
"This word-to-word pattern takes meaningless noise into consideration, such as informal language usage or text irrelevant to the question.",
"To alleviate this problem, Chen et al. (2018) proposes a context-aligned model to align phrase in QA relying on overlapped words and Stanford Core NLP tools 1 .",
"Inspired by co-attention, we extend it to sequential style to learn better representations and try to extract useful keywords with less parameters and resource toolkits.",
"In this work, we propose a Sequential Attention with Keyword Mask(SAKM) model for answer selection task.",
"We encode sentences similar to human reading behavior.",
"When generating the question, our model refers to the answer and combines the mutual information.",
"It is the same processing for producing answer representations.",
"So when encoding a question, answer text is used as context and vice versa.",
"We term this co-attention view as one hop.",
"Afterwards we repeat this process several times(hops) in a sequential style.",
"As such QA pairs review each other recurrently to remind of mutual information and refine the sentence representations to be more precise across hops.",
"Besides, the Keyword Mask modifies the attention mechanism such that the attention is computed over keywords instead of all words in the QA pair.",
"So only keywords in the long context are extracted at each time step.",
"The contributions in this paper are three folders:",
"1) We extend attention mechanism to sequential structure, so the question and answer review each other recurrently to improve the sentence representations.",
"2) Different from standard soft attention, we propose sequential attention with keyword mask(SAKM) model.",
"Besides, our model focuses on the significant words and filters other meaningless data.",
"3) We analyse the proposed SAKM model not only on classical answer selection tasks and but also multi-level answer ranking tasks.",
"Experiment results show that our model tends to encode more rich semantic representations with less parameters.",
"In Community-based Question Answering(CQA) services, since normally there exists a large number of question and answer pairs in repository, answer selection(AS) is a critical task, which focuses",
"on finding a suitable answer within a list of candidate answers.",
"As for traditional methods, feature engineering is a core work, but also a time-consuming and laborious task.",
"BM25(Robertson et al., 1994) calculates relevance with item frequency, while language models(Ponte and Croft, 2017; Zhai and Lafferty, 2004) use the maximum likelihood of a word estimated from the question.",
"Translation-based language models(Jeon et al., 2005; Xue et al., 2008) further improve.",
"Considering the syntactic structure, some prior works(Heilman and Smith, 2010; Tymoshenko and Moschitti, 2015) convert the sentence into a tree structure by dependency parsing.",
"Additionally, linguistics resources such as WordNet are utilized to enhance lexical features.",
"Classification models like chain Conditional Random Fields have been used to match the questions and an-swers(Kiritchenko et al., 2014).",
"Recently, neural network based models have shown effectiveness in various fields, such as computer vision(Krizhevsky et al., 2012) and natural language processing(Kim, 2014).",
"Different from aforementioned approaches, deep neural architec-tures(Hu et al., 2014; Wang and Nyberg, 2015) map each word into an embedding space, and compress the whole sentence into a low dimension vector.",
"Then a similarity function is defined to calculate the interactions between QA pairs.",
"Closer vectors in the embedding space represent much more relevant text.",
"To model fine-grained features, Qiu and Huang (2015) combines CNN with neural tensor net-work(NTN) to learn complex interactions between QA pairs.",
"But NTN increases a lot of parameters and costs more runtime and memory.",
"MV-LSTM proposed by Wan et al. (2016) uses bi-direction LSTM to generate a positional representation at each time step.",
"Subsequently these representations from questions and answers are fed into a tensor layer.",
"Shen et al. (2017) learns word representations in an embedding space by a translation matrix, and calculates relevance of each word pair in QA to compute a similarity matrix.",
"Then CNN maps this matrix to a score scalar.",
"Recently, (Tay et al., 2018b) applies the hyperbolic distance function to model the relationship between QA.",
"Other latent representation models construct interactions between QA when encoding sentences.",
"More mutual information is extracted to learn a better latent representation.",
"Miao et al. (2016) pro-Figure 1: The architecture of 3 hops SAKM model.",
"For simplification, we omit the lines in Sentence Representation Layer of Question part.",
"poses neural variational inference network.",
"Wang et al. (2016) takes question context into consideration in RNN cell of answer network with gate mechanism.",
"Yin et al. (2016) and dos Santos et al. (2016) propose attention-based CNN models to add attention weight matrix between a QA pair as a feature map.",
"Additionally, Zhang et al. (2017) extends attention weight matrix to 3D tensor including more diverse information.",
"Sha et al. (2018) proves co-attention view can significantly outperform the single attention.",
"Further more, Zhang et al. (2018) constructs two-step attention to obtain question aware vectors based on various words in an answer sentence.",
"Given a question, which may contain one or more clauses, it can be denoted as Q = ( q 1 , q 2 , ..., q n ) .",
"Similarly, an answer can be denoted as A = ( a 1 , a 2 , ..., a n ) .",
"A + and A represent a positive answer and a negative answer, respectively.",
"Fig. 1 describes the overall architecture of the proposed Sequential Attention with Keyword Mask(SAKM) model(in this figure, we use three hops as illustra-tion).",
"We extend our network in a sequential style.",
"For each hop, a serial of stacked layers are constructed for the questions and the answers.",
"Firstly the questions and the answers need to be fed into the embedding layer and each word in sentences corresponds to an one-hot vector.",
"Given a look-up table, each word is converted into an embedding space.",
"The index of the low dimensional vector in the look-up table is the same as one-hot vector.",
"We denote the embedding vectors of the QA pairs as Q emb = ( x 1 , x 2 , ..., x n ) R d | Q | and A emb = ( y 1 , y 2 , ..., y n ) R d | A | , where d is the embedding size, | Q | and | A | denote the length of the question and answer respectively.",
"In order to mitigate the risk of overfitting, we employ dropout layer to randomly ignore different part of neurons in different hops during training.",
"This process learns better representations of local regions and leads to better generalization during testing.",
"Gated Recurrent Unit(GRU) To encode a sentence into a single vector, we choose gated recurrent unit(Cho et al., 2014) and construct Q-GRU and A-GRU for the questions and answers, respectively.",
"Given an input sentence S = ( s 1 , s 2 , ..., s n ) R d | S | , GRU handles each word recurrently and at time step t the operation is defined as follow: r t = ( W r s t + U r h t 1 + b r ) z t = ( W z s t + U z h t 1 + b z ) h t = z t (cid:12) h t 1 + (1 z t ) (cid:12) h t (1) where h t = tanh ( W h s t + U h ( r t (cid:12) h t 1 ) + b h ) (2) In the above equations, W r , W z , W h R m d and U r , U z , U h R m m are parameters in the neurons.",
"m is the dimension size of the hidden states.",
"b r , b z and b h R m are bias.",
"is sigmoid function and (cid:12) means element-wise product.",
"Attention Mechanism During the GRU encoding process, an attention mechanism helps to combine context information with the current hidden states.",
"For the standard soft attention mecha-nism(Bahdanau et al., 2015), as demonstrated in Fig. 2, the hidden state at time t computes attention weights with all of the context hidden states, and then obtains alignment scores after softmax operation.",
"This mechanism takes all of the words in the context into consideration, while our model expects to extract some keywords to the current word and ignore other meaningless or noisy segments.",
"The keyword mask relies on the attention weights and reserves top percent of words to account for alignment scores.",
"It can be formulated as: e ij = v T tanh( W a [ h i ; h j ]) e maskij = f mask top k ( e ij , inf ) a ij = exp( e maskij ) (cid:80) | S | k =1 exp( e maskik ) c i = | S | (cid:88) j =1 a ij h j c i = tanh( W c [ c i ; h i ]) (3) where [; ] is the operator of concatenation, and f mask top k denotes the function that the top percent of values are reserved while others are masked as value inf .",
"So these masked positions in a ij become 0 after softmax operation, which represents no influence to the current hidden state h i .",
"Fig. 3 shows the details of this attention mechanism.",
"We will discuss the keywords percentage in more detail in Section 5.2.",
"After obtaining the representation of hidden neuron at time step t , we concat c i with next word as input corresponding to the dotted line described in Fig. 3.",
"When the whole sentence is processed, the final hidden output does not become representation directly because it loses much information about the beginning of the sentence.",
"Instead, average operation over all hidden outputs is taken to produce the final representation.",
"Sequential Extension As shown in Fig. 1, A-GRU regards the question sentences as context to compute attention and representations.",
"Likewise, Q-GRU reviews the answer contents to tune the representations.",
"We extend this process in a sequential style to capture features and enhance information both from question text and answer text.",
"All of the parameters across hops are shared.",
"For each hop, the vectors of QA pairs interact and improve.",
"Compared to single direction attention or single hop attention, our model gets much more flexibility.",
"It is capable of updating the representations towards the correct direction with the guide of a loss function and gradients across hops.",
"We rename the Eq.",
"3 as MaskAttention () , as such the equations of the sequential extension is defined as: Q h = MaskAttention ( Q hemb , A h 1 ) A h = MaskAttention ( A hemb , Q h ) (4) where h is the current number of hop.",
"The model iteratively updates the joint representations of the question and answer pair and obtains different outputs across hops.",
"Sentence Representation In the sentence representation layer, Q hrepresentation and A hrepresentation denote the final representation outputs of QA pairs at the hop h .",
"Since Q h and A h only contain the information extracted at hop h , more meaningful content would be lost after more hops.",
"But we expect to remember it from the beginning hops.",
"To convey more information across hops, we do not simply take Q h and A h as the sentence representations.",
"Instead, all of the previous outputs from MaskAttention are involved.",
"They can be calculated as: Q hrepresentation = 1 h h (cid:88) j =1 Q h A hrepresentation = 1 h h (cid:88) j =1 A h (5) 3.3 Similarity Calculation Finally, we design a weighted loss strategy to compute the relevance and the loss value between QA pairs.",
"For each hop, we have a pair of QA sentence representations and pass them through a similarity function described as: s h ( Q, A ) = Q hrepresentationT A hrepresentation || Q hrepresentation || 2 || A hrepresentation || 2 (6) where s h ( Q, A ) is the matching score between QA pairs at hop h .",
"|||| 2 means euclidean distance.",
"As for the loss function during training, given a question, we use pair-wise margin-based ranking loss for a triple ( Q, A + , A ) .",
"Thus the mathematical expression is: L h ( Q, A + , A ) = max(0 , m ( s h ( Q, A + ) s h ( Q, A ))) (7) where m is the predefined margin.",
"Since we expect that vector representations generated from posterior hops are more precisely than the ones produced from prior hops, relatively small tolerance to the risk of matching incorrect QA pairs is accepted for posterior hops.",
"Therefore, the loss values take increasing weights across hops.",
"We denote r h as loss weights.",
"The objective loss function can be defined as: L ( Q, A + , A ) = H (cid:88) h =1 r h L h ( Q, A + , A ) (8) 4 Experimental Study In this section, we test the proposed model on classical answer selection task and also multi-level answer ranking task to validate the model's effectiveness 2 .",
"Dataset & Implementation Details In this task, we use a community-based question answering dataset YahooCQA provided by Tay et al. (2017).",
"It is an open-domain community forum, and the dataset contains 142,627 QA pairs.",
"Sentences in YahooCQA are generally long and noisy.",
"We follow the preprocessing in their work without extra process.",
"Four negative answers are generated for a question using Lucene .",
"Table 1 demonstrates the statistics of YahooCQA.",
"For our model, we tune the hidden size to 300, and the numbers of GRU layers for modeling questions and answers are both 1.",
"Dropout is 0.5 and word embedding is pre-trained by skip-gram model.",
"For the Sequential Extension layer, the number of hops is 3.",
"For the Similarity Calculation, margin is 0.1 and weights for all hops are 2 Our code is available at https://github.com/ sheep-for/question_answer_matching set as (0.2, 0.3, 0.5).",
"Weights are set to constants because we promise to put more weight on later hops and keep reasonable tolerance for prior ones.",
"Batch size is 20.",
"All of the parameters are optimized by Back Propagation and Momentum.",
"Baselines We compare our model against several advanced deep neural network models.",
"CNTN(Qiu and Huang, 2015), NTN-LSTM, HD-LSTM(Tay et al., 2017) and HyperQA(Tay et al., 2018b) are interaction focused methods, while AP-CNN, AP-BiLSTM(dos Santos et al., 2016), QRNN(Bradbury et al., 2017), CTRN(Tay et al., 2018a) and two-step attention(Zhang et al., 2018) are latent representation models.",
"Additionally, we choose two traditional methods Random Guess and BM25(Robertson et al., 1994).",
"Evaluation Metrics For YahooCQA, we use Precision@1(P@1) and Mean Reciprocal Rank(MRR) to evaluate our model and the metrics are defined as: P @1 = 1 NN (cid:88) i =1 ( r ( A + ) = 1) MRR = 1 NN (cid:88) i =1 1 r ( q i ) (9) where is indicator function, N is the number of all queries and r ( q i ) is the rank of the first correct answer to question q i .",
"Experiment Results The results are shown in Table 2.",
"Firstly, it is observed that deep neural network models outperform traditional models.",
"Most latent representation models obtain better results than interaction focused models, indicating that earlier interactions when encoding sentences produces semantic vectors.",
"Most importantly, the proposed SAKM model achieves best results on both P@1 and MRR.",
"Our basic SA model outperforms two-step attention model by 3.8% in terms of P@1 and 1.8% in terms of MRR, which shows that our sequential extension structure is effective.",
"Furthermore, our SAKM model outperforms HyperQA model by 1.0% in terms of P@1 and 1.3% in terms of MRR.",
"Since HyperQA is an interaction focused model which adopts the hyperbolic distance function to model the relevance between QA.",
"we can combine it with our SAKM to obtain better performance in further study.",
"The experiment results agree with our intuition that extracting meaningful keywords in attention mechanism helps to generate more precise representations.",
"Relevant relationship in answer selection datasets is binary, only including relevance and irrelevance.",
"However, in the real CQA applications, it is difficult to verify whether the answers are completely correct or not.",
"This scenario has caused a challenge called multi-level answer ranking(Liu et al., 2018).",
"These answers for one question are annotated as several levels corresponding to the thumb-up numbers.",
"Dataset & Implementation Details To test the proposed model in multi-level answer ranking task, we choose the dataset ZhihuCQA provided by Liu et al. (2018).",
"Zhihu 3 is a popular and professional Chinese QA community platform with more than millions of users and QA pairs.",
"Table 1 describes the statistics of ZhihuCQA.",
"For each question, top five answers are selected and ranked by the thumb-up numbers.",
"We replace margin based ranking loss with RankNet(Burges et al., 2005).",
"In this task, we use the jieba 4 toolkits for word segmentation and tune the hidden size to 200, and the numbers of GRU layers for modeling questions and answers are both 2.",
"Other settings are the same as YahooCQA.",
"3 https://www.zhihu.com/ 4 https://github.com/fxsjy/jieba Baselines We compare our model against available advanced methods in (Liu et al., 2018).",
"ARC-II learns hierarchical pattern based on ARC-I(Hu et al., 2014).",
"Skip-Thoughts model(Kiros et al., 2015) trains an encoder-decoder model to construct sentence vectors.",
"Attentive LSTM, ABCNN and Compare-Aggregate(Wang and Jiang, 2017) are attention-based models and Rewrite+Rank is based on generative adversarial network.",
"Evaluation Metrics In this task, since the labels for an answer are not binary, we choose normalized discounted cumulative gain(NDCG) and expected reciprocal rank(ERR) for evaluation.",
"( o 1 , o 2 , ..., o M ) denotes the predicted orders of answers to a question.",
"NDCG is defined as: NDCG = DCG iDCG DCG = M (cid:88) i =1 2 o i 1 log(1 + i ) (10) where iDCG is the ideal DCG calculated from the correct orders ( l 1 , l 2 , ..., l M ) .",
"ERR is defined as: ERR = M (cid:88) r =1 R r r r 1 (cid:89) i =1 (1 R i ) R i = 2 o i 1 2 o m (11) where o m is the maximum of degree values.",
"Experiment Results Table 3 reports the results on ZhihuCQA.",
"Similar to YahooCQA, attention-based models perform better than other baselines.",
"Our SA model outperforms Rewrite+Rank by 3.6% in terms of NDCG and 3.2% in terms of ERR.",
"SAKM model achieves slightly improvement compared to SA version.",
"Results show that on multi-level answer ranking task, our review mechanism allows question and answer interaction while encoding in a more fine-grain aspect and leads to better performance.",
"In this section, we divide our discussion into three parts, including the trade-off between information transmission and avoidance of overfitting, the relationship between sentence length and keyword percentage, the advantages of our SAKM model.",
"5.1 Trade-off between Information Transmission and Avoidance of Overfitting Our model processes QA text in a sequential style.",
"For the first hop, the original contents are fed as inputs.",
"Afterwards, the representations are updated based on previous outputs across hops, thus it is significant to convey rich mutual information.",
"Meanwhile, since the sentences are long and redundant, it is necessary to avoid overfitting.",
"Information Transmission across Hops In order to utilize context better, We propose sequential style to refine the sentence vectors through multiple hops.",
"When calculating the sentence representations, we take the average over outputs of all time steps instead of selecting the final hidden state, and get the final sentence representations based on all hops.",
"Additionally, in embedding layer we pretrain the word embeddings on the corpus using word2vec(Mikolov et al., 2013).",
"To guide the vectors update in a correct direction based on gradients, the loss function is calculated over all hops, and puts more weight on later ones.",
"Avoidance of Overfitting There are some tricks to reduce the risk of overfitting.",
"Our model extracts some keywords according to the attention weights, and applies dropout to ignore different neurons in different hops.",
"Besides, the GRU layer is shallow and the number of hops is suitable.",
"In the attention mechanism, we use f mask top k to reserve top percent of attention weights.",
"In this part, we propose two strategies to explore the relationship between the sentence length and keyword percentage.",
"we count the lengths of all questions and answers respectively, and then sort them in an ascending order.",
"We choose the value of the third quartile as the number of keywords in a question, while the value of the first quartile as the number of keywords in an answer.",
"This strategy allows us not to calculate percentage according to various lengths in the dataset.",
"Our experiments empirically show that it works well.",
"compute the number of keywords for all sentences.",
"Question: Since the answers are produced based on question sequences, the question generally contains more meaningful information such as inquiry type, inquiry main verb, topic and so on.",
"We empirically calculate the number of keywords k in a question of length x as follow: k = min(10 ln x, x ) (12) Answer: As for answers submitted by users in community forum, there exists redundant contents, typos errors, emoticon and other informal language usage.",
"Inspired by TF-IDF algorithm, in our experiments, we propose a heuristics rule to calculate the number of keywords k .",
"The length of an answer is denoted as x .",
"For the first part, we compute the Total-Length(TL) term as follow: T L ( x ) = x (cid:98) lg x (cid:99) (13) where (cid:98)(cid:99) is rounding down operation.",
"TL value increases monotonously with length of sentence.",
"For the second part, we compute the Inverse-Noise-Frequency(INF) term as follow: INF ( x ) = 1 lg 2 x (14) This term represents the percent of the meaningful words, which is an inverse proportion to noisy words.",
"Finally, the number of keywords k can be obtained by multiplying these two terms.",
"Simplicity: Our SAKM model is simple but outperforms on large CQA datasets.",
"The network is shallow and all of the parameters across hops are shared.",
"Our model is not complicated and has less parameters than other mentioned neural network models.",
"Table 4 demonstrates the complexity analysis of some models.",
"Our model is an end-to-end neural network, and trained via back-propagation automatically.",
"The SAKM model could be an universal way to learn sentence vectors effectively and integrated in other larger neural network models.",
"It could be a useful tool in building neural architecture based representations for text sequences.",
"Convergency: As the sentence representations are tuned and improved across hops in one epoch, it costs less epochs for our model to converge.",
"In our experiments, performance improve quickly in first ten epochs.",
"Effectiveness: The QA pairs capture features and information both from question text and answer text, iteratively updating and improving question and answer representations through hops.",
"At the test time, choosing the outputs of the last hop from sentence representation layer as sentence vectors can obtain better results on P@1 and MRR, demonstrating the effectiveness of improvement across hops.",
"SAKM model outperforms SA model by more than 3% on P@1 and 2% on MRR.",
"It proves that extracting keywords is significant and necessary.",
"Besides, even if we choose the first hop of SA model, the gains are significant compared to two-way attention model.",
"It means that our refinement procedure leads to better representations for all hops.",
"Table 5 shows the details.",
"(a) Visualization of SA at first hop",
"Given a pair of QA as example.",
"Q: How can i get a list of glenville high school graduates in clvevland ohio.",
"A: Try google with the school name locationor use classmatescom theres links there for back dated yearbooks.",
"Fig. 4 displays the heatmap of the attention weights.",
"We can observe that compared to the first hop of SA model, the last hop puts more weights on phrase glenville high school graduates in clvevland ohio .",
"Further more, the last hop of SAKM model focuses on keywords how can, glenville high school graduates.",
"It shows that the inquiry type words and topic words achieve more attention.",
"From this heatmap, it indicates that our SAKM model is reasonable and works well.",
"In this work, we propose a sequential attention with keyword mask model for CQA.",
"Our model handles answer selection task similar to human reading behavior.",
"The questions and answers review each other recurrently to improve the representations.",
"This proposed attention mechanism focuses on some keywords and filters other meaningless data.",
"We evaluate our model on two tasks, answering selection and multi-level answer ranking.",
"The experiment results demonstrate that our model outperforms on CQA datasets and enhance mutual information between QA pairs effectively.",
"This work was supported in part by the State Key Laboratory of Software Development Environment of China (No. SKLSDE-2019ZX-16)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other"
] |
[
"Targeted syntactic evaluations have demonstrated the ability of language models to perform subject-verb agreement given difficult contexts.",
"To elucidate the mechanisms by which the models accomplish this behavior, this study applies causal mediation analysis to pre-trained neural language models.",
"We investigate the magnitude of models' preferences for grammatical inflections, as well as whether neurons process subject-verb agreement similarly across sentences with different syntactic structures.",
"We uncover similarities and differences across architectures and model sizes notably, that larger models do not necessarily learn stronger preferences.",
"We also observe two distinct mechanisms for producing subject-verb agreement depending on the syntactic structure of the input sentence.",
"Finally, we find that language models rely on similar sets of neurons when given sentences with similar syntactic structure.",
"Targeted syntactic evaluations have shown that neural language models (LMs) are able to predict the correct token from a set of grammatically minimally different continuations with high accuracy, even in difficult contexts (Linzen et al., 2016; Gulordava et al., 2018), for constructions such as subject-verb agreement (van Schijndel et al., 2019), filler-gap dependencies (Wilcox et al., 2018), and reflexive anaphora (Marvin and Linzen, 2018).",
"As an illustration of the targeted syntactic evaluation paradigm, consider the following example, which demonstrates subject-verb agreement across an agreement attractor.",
"Here, a model using a linear Equal contribution.",
"analysis (i.e., inflecting based on the most recent noun) would choose the ungrammatical inflection, while a model using a hierarchical analysis would choose the grammatical inflection: (1) The key to the cabinets is/*are next to the coins.",
"While we have a reasonable understanding of the generally correct behavior of LMs in such contexts, the mechanisms that underlie models' sensitivity to syntactic agreement are still not well understood.",
"Recent work has performed causal analyses of syntactic agreement units in LSTM (Hochre-iter and Schmidhuber, 1997)-based LMs (Lakretz et al., 2019; Lu et al., 2020) or causal analyses of LSTM hidden representations' impact on syntactic agreement (Giulianelli et al., 2018), but the agreement mechanisms of Transformer-based LMs have not been as extensively investigated.",
"Transformer-based LMs' syntactic generalization abilities are superior to those of LSTMs (Hu et al., 2020), which makes Transformer-based models enticing candidates for further analysis.",
"We apply the behavioral-structural method of causal mediation analysis (Pearl, 2001) to investigate syntactic agreement in Transformers, following the approach used by Vig et al. (2020a) for interpreting gender bias in pre-trained English LMs.",
"This method allows us to implicate specific model components in the observed behavior of a model.",
"If we view a neural LM as a causal graph proceeding from inputs to outputs, we can view each model component (e.g., a neuron) as a mediator.",
"We measure the contribution of a mediator to the observed output behavior by performing controlled interventions on input sentences and observing how they change the probabilities of continuation pairs.",
"We focus primarily on GPT-2 (Radford et al., 2019), although we also analyze TransformerXL (Dai et al., 2019) and XLNet (Yang et al., 2019).",
"We find that both GPT-2 and Transformer-XL use two distinct mechanisms to accomplish subject-verb agreement, one of which is active only when the subject and verb are adjacent.",
"Conversely, XLNet uses one unified mechanism across syntactic structures.",
"Even though larger models assign a higher probability to the correct inflection more often, this does not necessarily translate to a larger margin between the probability of the correct and incorrect options.",
"Additionally, in larger models, agreement mechanisms are similar to those in smaller models, but are more distributed across layers.",
"Finally, we find that the most important neurons for agreement are shared across different structures to various extents, and that the degree of neuron overlap matches well with human intuitions of syntactic similarity between structures.",
"Many recent studies have treated neural LMs and contextualized word prediction modelsprimarily LSTM LMs (Sundermeyer et al., 2012), GPT-2 (Radford et al., 2019), and BERT (Devlin et al., 2019)as psycholinguistic subjects to be studied behaviorally (Linzen et al., 2016; Gulordava et al., 2018; Goldberg, 2019).",
"Some have studied whether models prefer grammatical completions in subject-verb agreement contexts (Marvin and Linzen, 2018; van Schijndel et al., 2019; Goldberg, 2019; Mueller et al., 2020; Lakretz et al., 2021; Futrell et al., 2019), as well as in filler-gap dependencies (Wilcox et al., 2018, 2019).",
"These are based on the approach of Linzen et al. (2016), where a model's ability to syntactically generalize is measured by its ability to choose the correct inflection in difficult structural contexts instantiated by tokens that the model has not seen together during training.",
"In other words, this approach tests whether the model assigns the correct inflection a higher probability than an incorrect inflection given the same context.",
"This approach investigates the output behavior of the model, but does not inform one of how the model does this or which components are responsible for the observed behavior.",
"A separate line of analysis work has investigated representations associated with syntactic dependencies by defining a family of functions (probes)",
"that map from model representations to some phenomenon that those representations are expected to encode.",
"For instance, several studies have mapped LM representations to either independent syntactic dependencies (Belinkov, 2018; Liu et al., 2019; Tenney et al., 2019b) or full dependency parses (Hewitt and Manning, 2019; Chi et al., 2020) as a proxy for discovering latent syntactic knowledge within the model.",
"Most related, Giulianelli et al. (2018) use probes to investigate how LSTMs handle agreement.",
"Probing is more difficult to interpret than behavioral approaches because the addition of a trained classifier introduces confounds (Hewitt and Liang, 2019): most notably, whether the probe maps from model representations to the desired output, or learns the task itself.",
"Probes also only give correlational evidence, rather than causal evidence (Belinkov and Glass, 2019).",
"See Belinkov (2021) for a review of the shortcomings of probes.",
"Causal inference methods study the change in a response variable following an intervention; for example, how do health outcomes change after a patient stops consuming nicotine products?",
"Causal mediation analysis (Robins and Greenland, 1992; Pearl, 2001; Robins, 2003) focuses on the role of a mediator in explaining the effect of a treatment on outcomes.",
"For example, if a patient stops using tobacco, are health outcomes mediated by the initial method of nicotine delivery (e.g., smoking tobacco vs. patches vs. nicotine gum)?",
"This approach lends itself well to interpreting NLP models, as we can view a deep neural network as a graphical model from input to output via mediators, where mediators can be individual components (e.g., neurons).",
"For LMs, the intervention is a change to the input sentence, and the outcome is a function of the probabilities of a set of continuations.",
"This approach for interpreting NLP models was introduced by Vig et al. (2020a), who implicate specific neurons and attention heads in mediating gender bias in various pre-trained LMs.",
"While one ideally expects equal preferences for male and female completions given gender-ambiguous contexts (for example, given the prompt u The nurse said that, we want p ( she | u ) p ( he | u ) ), this is not the case for subject-verb agreement, where we expect very strong preferences for grammatically Simple Agreement : The athlete confuses/*confuse Within Object Relative Clause: The friend (that) the lawyers *likes/like Across One Distractor : The kids gently *admires/admire Across Two Distractors : The father openly and deliberately avoids/*avoid Across Prepositional Phrase : The mother behind the cars approves/*approve Across Object Relative Clause: The farmer (that) the parents love confuses/*confuse Figure 1: Syntactic structures used in this study.",
"First, we define prompts u .",
"These prompts are a set of left contexts (beginnings of sentences), generated from a vocabulary and a set of templates developed by Lakretz et al. (2019).",
"We expand the vocabulary with additional tokens, and add relative clause (RC) templates.",
"We opt to synthetically generate prompts rather than sample from a corpus to control for the potential confound of token collocations in the training set.",
"We use prompts from six syntactic structures; an example of each may be found in Figure",
"1. For each structure, we randomly sample 300 prompts from all possible noun-verb combinations.",
"Our dataset, code, and random seeds are available on Github.",
"1 In the simple agreement' and within RC' constructions, there is no separation between the target subject and verb.",
"The across one distractor' and across two distractors' structures test the effect of placing one or two adverbs between the subject and verb.",
"Finally, the across PP' and across RC' structures test the effect of adding a noun (and verb in the latter structure) between the main subject 1 https://github.com/mattf1n/lm-intervention Size Layers Embedding size Heads Distil 6 768 12 Small 12 768 12 Medium 24 1024 16 Large 36 1280 20 XL 48 1600 25 Table 1: GPT-2 sizes used in this study.",
"and the main verb.",
"In the across RC' and within RC' structures, we measure effects both with and without the complementizer that .",
"2 In each of these constructions, we define a correct and an incorrect continuation.",
"Here, we focus on the third-person singular/plural distinction.",
"We focus primarily on GPT-2 (Radford et al., 2019), an autoregressive Transformer-based (Vaswani et al., 2017) English LM.",
"We use several GPT-2 sizes, including DistilGPT-2 (Sanh et al., 2020), a very small distilled version.",
"Table 1 gives model details for the different sizes of GPT-2.",
"To investigate how differences in training across Transformer-based architectures manifest themselves in syntactic agreement mechanisms, we also investigate Transformer-XL (Dai et al., 2019) and XLNet (Yang et al., 2019).",
"Transformer-XL is an autoregressive English LM whose training objective is similar conceptually to GPT-2's; however, it has a much longer effective context.",
"XLNet is an English LM which proceeds through various word order permutations of the input tokens during training, and which uses a distinct attention masking mechanism as well; during testing, it proceeds autoregressively through the input similar to the other two models.",
"We use the relative probabilities of the correct and incorrect tokens as a measure of the preference of a model (parameterized by ) for the correct inflection of a verb v v given prompt u u with number feature sg : y ( u sg , v ) = p ( v pl | u sg ) p ( v sg | u sg ) (1) 2 A comparison of total and indirect effects when including or excluding the complementizer may be found in Appendix C. where y < 1 indicates a preference for the correct inflection, and y > 1 indicates a preference for the incorrect inflection.",
"3 To obtain counterfactual inputs, we now define a class of interventions x that modify the prompts in u in a systematic way.",
"As we are concerned with the ability of models to choose correct inflections despite the presence of distractors and attractors, we define the intervention swap-number , which replaces the target subject with the same lexeme of the opposite number inflection (e.g., change au-thor to authors or vice versa).",
"We also define the null intervention, which leaves u as-is (as in Vig et al. 2020a).",
"Now we define y x ( u, v ) , which is the value of y under intervention x on prompt u .",
"Because the intervention swap-number entails swapping the subject for a noun of the opposite number, we now expect y > 1 in Equation 1 if the model prefers the grammatically correct form, since the verb that was originally the correct inflection is now incorrect and vice versa.",
"Note that under this definition, y swap-number ( u sg , v ) = 1 /y null ( u pl , v ) .",
"The total effect (TE) for the intervention swap-number (illustrated in Figure 2) is the relative change between the probability ratio y under the swap-number intervention and the ratio under the null intervention: TE ( swap-number, null ; y, u, v ) = y swap-number ( u sg , v ) y null ( u sg , v ) y null ( u sg , v ) = y swap-number ( u sg , v ) /y null ( u sg , v ) 1 = 1 / ( y null ( u sg , v ) y null ( u pl , v )) 1 (2) We interpret this quantity as the overall preference of a model for the correct inflection of v in context u .",
"Observe that this definition remains the same when sg and pl are swapped in Equation 2, therefore we do not specify whether u is plural or singular in TE ( swap-number, null ; y, u, v ) .",
"We are interested in the average total effect across prompts and verbs: TE ( swap-number, null ; y ) = E u,v (cid:20) y swap-number ( u, v ) y null ( u, v ) 1 (cid:21) (3) We calculate the average total effect for each syntactic construction for different sizes of GPT-2 3 We arbitrarily choose to start with sg ; we can swap sg and pl in Eq.",
"1 without loss of generality since we do not directly observe y .",
"This is clarified after Eq.",
"2. Figure 2: Total effects are measured by performing an intervention on the prompt (here, changing the grammatical number of the main subject), and measuring the relative change in the response variable (the ratio of probabilities of the originally incorrect verb form over the originally correct verb form).",
"and consider other models later on.",
"As a control, we also calculate total effects for models with random weights.",
"Unlike in Linzen et al. (2016), we do not measure accuracies by checking whether one probability is higher than another.",
"Rather, the total effect quantifies the margin between the probabilities of correct and incorrect continuations with some intervention.",
"Because larger models tend to exhibit correct subject-verb agreement more often than smaller ones (Hu et al., 2020; van Schijndel et al., 2019), we hypothesize that larger models will generally have larger TEs for the same structure (i.e., we predict that higher accuracy is indicative of larger margin between probabilities).",
"Figure 3 presents total effects by structure for various sizes of GPT-2.",
"For models with random weights, TEs are always near-zero, and as such are not shown in the figure.",
"In simple agreement' and within RC', where there is no separation of subject and verb, TEs vary between 1,000 and 5,000, depending on model size.",
"This is far higher than the TEs below 250 reported for gender bias in Vig et al. (2020a), which is to be expected: GPT-2's training objective explicitly optimizes for predicting (ideally grammatically correct) tokens given a context.",
"Unlike Vig et al. (2020a), we do not observe larger TEs for larger models.",
"Adverbial distractors increase total effects.",
"TEs are even higher for structures where distractors are present, with DistilGPT-2 and GPT-2 Small attaining the highest TEs in such contexts.",
"This is surprising, as one might expect subject-verb agreeS i m p l e 1 d i s t r a c t o r 2 d i s t r a c t o r s S i n g u l a r P P P l u r a l P PS i n g u l a r RCP l u r a l RCW i t h i n s i n g u l a r RCW i t h i n p l u r a l RC Structure 0 5000 10000 15000 20000 25000 30000 35000 T o t a l E ff e c t S i n g u l a r P P P l u r a l P PS i n g u l a r RCP l u r a l RC 0 500 1000 1500 DistilGPT-2 GPT-2 Small GPT-2 Medium GPT-2 Large GPT-2 XL Figure 3: Total effects for each structure by model size for GPT-2.",
"ment accuracy to decline as the distance between the subject and the verb increases.",
"We suspect that adverbs are acting as cues that a verb will soon appear, thus increasing the probability of both the correct and incorrect verb, but increasing that of the correct verb more (for similar findings in human sentence processing, see (Vasishth and Lewis, 2006)).",
"Additional analysis supports this hypothesis; see Appendix B. Attractors decrease total effects.",
"When PPs or RCs separate the subject and verb, TEs decrease.",
"The number of the attractor does not significantly change TEs across PPs, but does have a more notable effect across RCs: GPT-2 is more certain of its choices across singular RCs than across plural RCs, as evidenced by higher TEs for the former.",
"Notably, GPT-2 Medium tends to achieve the highest TEs in attractor structures, except in the across plural RC' structure.",
"Total effect measures the effect of swapping the number of the subject, but does not distinguish the case where the original subject (before swapping) was singular from the case where it was plural.",
"To investigate the effect of the original subject number on the model's preference for the correct (or incorrect) inflection, we define the metric grammaticality margin (referred to hereafter as grammaticality ) as the reciprocal of y given prompt u with a specific number feature sg or pl : G ( u sg , v ) = 1 /y ( u sg , v ) G ( u pl , v ) = 1 /y ( u pl , v ) (4) Recalling the definition of y , this measure is the probability ratio between the model correctly and incorrectly resolving subject-verb agreement.",
"We define G as the reciprocal of y so that when the model has a high preference for the correct inflection over the incorrect inflection, G is large.",
"Differences in grammaticality values for plural and singular subjects can indicate systematic biases toward a certain grammatical number.",
"We expect this quantity to be lower if there is an attractor of a different number from the subject, whereas we expect it to increase if the attractor is of the same number as the subject.",
"Figure 4 presents grammaticality values separately for singular and plural subjects, as well as singular and plural attractors when applicable.",
"While we expect higher grammaticality values when the subject number matches the attractor number, we instead observe that plural subjects always have higher grammaticality values regardless of the structure or attractor number.",
"In other words, it is always easier for GPT-2 to form agreement dependencies between verbs and plural subjects than singular subjects.",
"This may be due to plural verbs being encoded as defaults in GPT-2, as was found for LSTM LMs in Jumelet et al. (2019).",
"This S i m p l e 1 d i s t r a c t o r 2 d i s t r a c t o r s S i n g u l a r PPP l u r a l PPS i n g u l a r RCP l u r a l RCW i t h i n s i n g u l a r RCW i t h i n p l u r a l RC Structure 0 100 200 300 G r a mm a t i c a li t y Singular subject Plural subject Figure 4: Grammaticality for each structure for GPT-2 Medium.",
"would make intuitive sense, because singular third person verbs are marked in English present-tense.",
"Attractors that separate subjects and verbs decrease grammaticality, regardless of plurality.",
"The same is not true of distractors: placing adverbs between the subject and verb tends to have little effect, even though the across two distractors' structure places the same token distance between subject and verb as across a PP'.",
"This means that distance between subject and verb is less important than the type of the structure separating them.",
"As expected, when holding the subject number constant (i.e., looking only at blue bars or only at orange bars in Figure 4), grammaticality values are higher when the attractor has the same number as the subject.",
"Attractors that precede subjects have number-dependent impacts on grammaticality.",
"In the within singular RC' structure, grammaticality is only slightly reduced for both singular and plural subjects compared to the simple agreement' structure.",
"However, within plural RC' has a polarizing effect: grammaticality is greatly reduced for singular subjects, but greatly increased for plural subjects.",
"This is the only attractor structure with higher grammaticality than the simple case.",
"The natural indirect effect (NIE), illustrated in Figure 5, is the relative change in the ratio y when the prompt u is not changed, but a model component Figure 5: Indirect effects are measured by setting an individual neuron to the value it would have taken had the intervention occurred, then measuring the relative change in the response variable.",
"z (e.g., a neuron) is set to the value it would have taken if the intervention had occurred.",
"NIE ( swap-number, null ; y, z ) = E u,v (cid:20) y null , z swap-number ( u,v ) ( u, v ) y null ( u, v ) 1 (cid:21) (5) This allows us to evaluate the contribution of specific parts or regions of a model to the syntactic preferences we observe.",
"More specifically, we can measure to what extent the total effect of swapping the subject on inflection preferences can be attributed to specific neurons.",
"Here, we independently analyze the individual neuron NIEs for GPT-2, Transformer-XL, and XLNet (future work could also investigate intervening on sets of neurons simultaneously).",
"We also attempt to analyze attention heads for GPT-2, though we find that they do not present consistent interpretable results with the swap-number intervention (see Appendix A).",
"This is consistent with the findings of Htut et al. (2019) who do not find a straightforward connection between attention weights and the model's syntactic behavior.",
"Based on the findings of prior probing work on dependency parsing (Hewitt and Manning, 2019), we hypothesize that NIEs will peak in the upper-middle layers for all models.",
"Because XLNet is exposed to all word order permutations of its input sentences during training, we hypothesize that it will display similar indirect effect results across syntactic structures.",
"Conversely, GPT-2 and Transformer-XL always process input left-to-right, so we expect that for these two models, differing syntactic structures will yield unique indirect effect results.",
"For each model and structure, we select the 5% of neurons with the highest NIE in each layer; Figure 6 compares NIEs across model sizes, and Figure 7 compares NIEs across structures for GPT-2 Medium.",
"4 We observe two distinct layer-wise contour patterns.",
"In structures where the target verb directly follows the subject (simple agreement' and within RC', the top 3 plots in Figure 6), NIEs continually increase in higher layers.",
"Conversely, for structures with subject-verb separation (across one/two distractor(s)', across PP', and across RC', the bottom 3 figures in Figure 6), NIEs peak at layer 0 and (more notably) in the 4 We also produced figures using all neurons.",
"When doing so, the contour of the graph across layers did not change, but the magnitudes were lower since we average over more neurons.",
"upper-middle layers.",
"This is in line with the probing results of Hewitt and Manning (2019) and Tenney et al. (2019a), who find that the highest amount of syntactic information is encoded in the upper-middle layers.",
"In the final layers of the model, the effect decreases sharply, reaching 0 in the uppermost layers.",
"The peak NIE is lower here than for structures where there is no separation, perhaps indicating that syntactic agreement information is localized in fewer neurons when separation occurs.",
"Even a single token between subject and verb brings about this second indirect effect contour, indicating that distance is a less important fac-tor than the presence of any separation in invoking this second syntactic agreement mechanism .",
"The distinct indirect effect contours for the adjacent and non-adjacent cases may indicate distinct subject-verb agreement mechanisms for shortand long-distance agreement, consistent with similar findings for LSTMs (Lakretz et al., 2019).",
"As a control, we repeated the experiment for GPT-2 with randomized weights.",
"We find that for all structures, when weights are randomized, indirect effects peak at layer 0albeit at values perhaps too small to be meaningfuland then remain close to 0 in higher layers.",
"This indicates that the vast majority of the indirect effect observed for trained models is an outcome of learning from the training data rather than of the architecture.",
"NIEs are also more distributed across layers for larger models.",
"This suggests that structural knowledge is concentrated in fewer neurons with stronger inflectional preferences in smaller models, and is more distributed across neurons in larger models.",
"Nonetheless, the overall contour of NIEs is similar across model sizes for a given structure, indicating that mechanisms of agreement are similar across model sizes .",
"We also investigate the neuron NIEs of Transformer-XL (Figure 8) and XLNet (Fig-ure 9) to observe whether syntax is represented in a similar manner across models (for total effects across architectures, see Appendix E).",
"Local and non-local agreement diverges in a similar way in GPT-2 and Transformer-XL.",
"The layer-wise contour is similar for simple agreement' and within RC' across the two architectures, and differs significantly from the cases where subject and verb are separated, which is again similar across architectures.",
"This supports our hypothesis that GPT-2 and Transformer-XL encode syntax in a similar manner.",
"Indirect effects in XLNet are different to those seen in GPT-2.",
"In XLNet, we do not observe the same dichotomous behavior between subject-verb adjacent and subject-verb non-adjacent structures; rather, the overall contours are all similar.",
"All of the indirect effects approach 0 in the final layer.",
"This resembles the contours from GPT-2 and Transformer-XL for structures where subject and verb are not adjacent.",
"We conjecture that this pattern arises because XLNet observes many word order permutations of the same inputs during training; this acts as a form of regularization that prevents it from evolving bifurcating mechanisms for local and non-local dependencies.",
"While Sinha et al. (2021) found that natural word order during pre-training matters little for downstream performance on tasks in benchmarks like GLUE (Wang et al., 2018), they also found that randomizing word order greatly reduced model preferences for correct inflections in syntactic evaluation stimuli.",
"This findingcoupled with the distinct word-order-dependent agreement mechanisms that we discoversuggests that models do make use of word order information, rather than just higher-order word collocation statistics.",
"The layer-wise NIE contours in Section 6.1 show the NIE of the top neurons in each layer, but do not show which neurons make it into the top 5%.",
"To investigate whether the same neurons are implicated in subject-verb agreement across structures, we select the top 5% of neurons per layer by NIE and calculate the proportion of these high-NIE neurons that overlap between each pair of structures.",
"Does the extent of neuron sharing across structures correlate with human intuitions of syntactic similarity?",
"To address this question, we compute hypothesized syntactic similarities between structures based on the following linguistic features: distance between subject and verb; presence of adverbial distractors, a relative clause, prepositional phrase, and/or a noun attractor; and the number of the noun attractor when present.",
"Appendix D.1 provides additional details on the calculation of ground-truth similarity.",
"To quantify the similarity of the hypothesis matrix and a neuron overlap matrix, we calculate the (cid:96) 1 norm 5 of the element-wise difference between the lower-left triangle of both matrices, as the matrices are symmetric.",
"We exclude the diagonal.",
"5 Using the (cid:96) 2 norm does not change which layer in each model has the lowest difference norm.",
"For each model, we present neuron overlaps for the layer with the lowest difference norm to the hypothesis (Figure 10; for an analysis of layer-by-layer overlap change for GPT-2, see Appendix D.2).",
"The lowest difference norms are 443 (GPT-2), 510 (Transformer-XL), and 486 (XLNet).",
"GPT-2 Medium's overlap across structures at layer 21 (of 24) is visually similar to the hypothesis, indicating that this layer in GPT-2 shares neurons for subject-verb agreement across structures in a way that aligns with human intuitions about syntactic similarity.",
"Interestingly, it learns to do this without receiving explicit syntactic supervision during training.",
"Layer 15 (of 18) of Transformer-XL displays similar trends to GPT-2, though the extent of overlap is higher across structures in general here.",
"There is more significant overlap between the adverbial distractor structures and the structures that contain attractors.",
"Simple agreement' also has more overlap with structures containing attractors than within RC', which is contrary to our hypothesis matrix.",
"We also note that across singular RC' has more overlap with across PP' than across plural RC' (and vice versa for across plural RC'), indicating that the number of the attractor is more salient to Transformer-XL than the structure of the phrase containing the attractor.",
"Layer 8 (of 12) of XLNet gives rise to a noisier similarity matrix.",
"There is slightly more overlap between structures across noun attractors, but the extent of overlap is smaller compared to other models.",
"This suggests that more of the neurons are specialized to processing specific structures.",
"However, the indirect effect findings for XLNet suggest a more unified mechanism for syntactic agreement across all structures; if this were the case, we would expect neuron overlap to be high, and for the extent of overlap to be similar across all structures, rather than being higher between more similar structures.",
"We observe the latter, but not the former.",
"Regardless, both observations further support our hypothesis that XLNet uses different mechanisms to resolve number agreement than the other two architectures.",
"This study applied causal mediation analysis to discover and interpret the mechanisms behind syntactic agreement in pre-trained neural language models.",
"Our results reveal the location and importance of various neurons within various models, and provide insights into the inner workings of these LMs.",
"For future work, we suggest intervening on groups of neurons and attention heads to see how these components work together, and extending the analysis to phenomena such as filler-gap dependencies and negative polarity items.",
"Further work should also explore the impact of specific verbs on syntactic agreement mechanisms (Newman et al., 2021).",
"Lastly, we suggest examining examples where the model makes incorrect predictions to determine how models misuse the mechanisms from Section 6.1.",
"Y.B. was supported in part by the ISRAEL SCIENCE FOUNDATION (grant no. 448/20) and by an Azrieli Foundation Early Career Faculty Fellowship.",
"A.M. was supported by a National Science Foundation Graduate Research Fellowship (grant no. 1746891).",
"In this paper, we apply causal mediation analysis in order to study the subject-verb agreement mechanisms in language models.",
"While the focus of this work is on the analysis itself, our insights may influence the training strategies for new models.",
"Specifically, our findings on the relationship between model size and syntactic agreement and the comparison of different model architectures may help researchers decide which model to use.",
"In doing so, others may try to extrapolate our findings, which are limited to the domain of specific syntactic structures and subject-verb agreement in English language models, to other tasks and languages for which we cannot make these claims.",
"The focus on English of this study additionally furthers the discrepancy compared to other languages which continue to be studied much less.",
"Moreover, we do not study mitigation mechanisms for our findings and thus do not know the consequences of modifying the training procedures of language models beyond the three examples we studied.",
"One concrete example for a case where our findings could have wider impact regards our finding that models have higher grammaticality for plural subjects.",
"Others may find that this is undesired behavior and thus try to augment their training data to increase the number of subjects in singular form, which could have unanticipated consequences on model performance and mechanisms.",
"Beyond the concrete findings in this paper, there are also broader considerations in the popularization of causal mediation analysis.",
"Specifically, as pointed out by Vig et al. (2020a), it is a challenging problem to extend the effect measures beyond binary cases.",
"While subject-verb agreement is by nature a binary problem, there are many others that benefit from a more nuanced view, specifically in topics related to fairness and bias.",
"Thus, by popularizing an approach that is easier to apply in a binary case, we may have the unintended effect of complicating analyses conducted by others who want to follow our approach.",
"As an active mitigation, we direct readers to the extended version of Vig et al. (2020b), which discusses effect measures beyond the binary case."
] | [
"abstain",
"method",
"objective",
"objective",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Increasing the input length has been a driver of progress in language modeling with transformers.",
"We identify conditions where shorter inputs are not harmful, and achieve perplexity and efficiency improvements through two new methods that decrease input length.",
"First, we show that initially training a model on short subsequences before moving on to longer ones both reduces overall training time and, surprisingly, substantially improves perplexity.",
"Second, we show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens when generating sequences that exceed the maximal length the transformer can handle at once.",
"Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results.",
"We show that these recurrent models also benefit from short input lengths.",
"Combining these techniques speeds up training by a factor of 1.65, reduces memory usage, and substantially improves perplexity on WikiText-103, without adding any parameters.",
"1 1 Introduction Scaling up transformer (Vaswani et al., 2017) language models (Radford et al., 2019; Lewis et al., 2019; Raffel et al., 2019; Brown et al., 2020) has been an important driver of progress in NLP.",
"Language models require data to be segmented into subsequences for both training and inference: memory constraints limit a language model to handling at most a few thousand tokens at once, while many training and evaluation datasets are much longer.",
"Recent work focuses on increasing the length of input subsequences, which determines the maximum number of tokens a model can attend to (Baevski and Auli, 2018; Sukhbaatar et al., 2019; Kitaev",
"et al., 2020; Roy et al., 2020).",
"We challenge the assumption that longer input subsequences are always better by showing that existing transformers do not always effectively use them.",
"We then introduce new methods based on shorter input subsequences that improve runtime, memory efficiency, and perplexity.",
"We first investigate how input subsequence length affects transformer language models (3).",
"Nave evaluationwhere we split a large evaluation set into multiple nonoverlapping subsequences, each evaluated independentlyinitially supports the commonly-held belief that models that train and do inference on longer subsequences achieve better perplexity (Table 1, col. 3).",
"However, when we evaluate each model with a sliding window (Baevski and Auli, 2018), outputting one token at a time using the maximal amount of context, we findsurprisinglythat models using subsequences exceeding 1 , 024 tokens do not further improve performance (Table 1, col. 5).",
"We conclude that the performance gains (using nave evaluation) of models that use longer subsequences occur not only because of their better modeling ability, but partly because they divide the evaluation set into longer subsequences.",
"This di-vision helps because of an issue we call the early token curse : by default, early tokens in a subsequence will have short histories to attend to.",
"Using longer subsequences means fewer tokens will suffer from the early token curse.",
"For example, when using inputs of length 1 , 024 , about 94% of tokens get to attend to more than 64 preceding tokens.",
"If we use inputs of length 128 , only 50% of tokens get to attend to 64 or more preceding tokens.",
"Based on this analysis, we explore how to improve models by using shorter inputs.",
"We introduce two techniques.",
"Staged Training (4) First, we show that initially training on shorter subsequences (before moving to longer ones) leads not only to much faster and more memory-efficient training, but it surprisingly also greatly improves perplexity, suggesting that longer inputs are harmful early in training.",
"Position-Infused Attention (5) Second, we consider a natural way to avoid the early token curse during training and inference: attending to cached representations from the previously evaluated subsequence (Dai et al., 2019).",
"This approach interferes with conventional absolute position embeddings in a way that forced Dai et al. to use relative position embeddings, which are computationally expensive.",
"We introduce a fast, simple alternative: instead of adding absolute position embeddings to word embeddingsthereby entangling a word's content and positional informationwe add them to the keys and queries in the self-attention mechanism (but not to the values).",
"This does not increase parameter count or runtime.",
"Token representations can then be cached and reused in subsequent computations.",
"We show that when using this method, shorter subsequence models outperform longer ones.",
"Finally, we show additive gains from combining staged training and position-infused attention (Shortformer, 6), resulting in a model that trains much quicker and achieves better perplexity on WikiText-103.",
"We also show that these results transfer to language modeling on the Toronto Book Corpus (A.5, appendix).",
"Transformer language models map a list of tokens x n L : n 1 to a probability distribution over the next token x n .",
"We refer to the list of tokens as the current input subsequence (whose length is L ).",
"Causal masking lets us make L predictions at once, with the prediction for token i + 1 conditioned on the i th token and all previous inputs x n L : i 1 , but not on future inputs.",
"We define the number of tokens the model can attend to at each timestep as its effective context window .",
"Note that L is not to be confused with the (typically much greater) length of a training or evaluation dataset.",
"During inference, language models can be used for two distinct tasks: generation and evaluation.",
"Nonoverlapping Inference To evaluate a string longer than L , we can evaluate each subsequence of L tokens independently.",
"This fast approach is commonly used during training; if used, tokens in one subsequence cannot condition on those in the previous subsequence, giving rise to the early token curse discussed in 1.",
"See Figure",
"1(a).",
"Sliding Window Inference An alternative to the above is to use a sliding window during inference.",
"Here, we choose a stride S between 1 and L 1 and advance the window by S tokens after each forward pass.",
"2 This means that L S tokens from the previous block are re-encoded, and only S new tokens are outputted.",
"The advantage is that all outputs in each subsequence after the first have at least L S previous tokens to condition on.",
"However, since tokens must be re-encoded multiple times, this approach is much slower.",
"When S = 1 , we output one token every inference pass, each using the maximal context window, but this is the slowest approach.",
"See Figure",
"1(b).",
"Minimal and Maximal Effective Context Window Sizes In the nonoverlapping approach, the min. and max.",
"effective context window sizes are 1 and L , respectively.",
"In the sliding window approach, the max.",
"context window size is still L , but the min. context window size is now L S + 1 .",
"Evaluation vs. Generation In evaluation , a model assigns a perplexity score to a given sequence.",
"Evaluation is done using either nonoverlapping inference or with a sliding window of any stride; since we already have the target sequence we can simultaneously make predictions for multiple timesteps using causal masking.",
"In generation , a model generates a new sequence, as in demonstrations of GPT-3 (Brown et al., 2020).",
"Generation is done only with a sliding window with stride S = 1 , which we refer to as token-by-token generation.",
"During generation, we append to the input a single new token, get a prediction from the model about the next token (e.g., using beam search or picking the token with the highest probability); the process is then repeated.",
"3 2 Nonoverlapping inference can be viewed as sliding window inference with stride L .",
"3 In this paper we do not consider open-ended generation; we generate the dev.",
"set, and for next-token prediction we",
"Experimental Setup Our baseline is the Baevski and Auli (2018) model, henceforth B & A , trained and evaluated on WikiText-103 (Merity et al., 2016).",
"We use this baseline because of its prominent role in recent language modeling developments (Khandelwal et al., 2020; Press et al., 2020).",
"The training set contains 103.2 million tokens from English Wikipedia.",
"The B & A model has 16 transformer layers of dimension 1 , 024 , with 8 heads in each self-attention sublayer, and feedforward sublayers with an inner dimension of 4 , 096 .",
"This model ties the word embedding and softmax matrices (Press and Wolf, 2017; Inan et al., 2017) and uses sinusoidal position embeddings.",
"It has a subsequence length of 3 , 072 tokens and achieves a perplexity of 18.65 0.24 (std. dev.) on the development set.",
"In our experiments, other than varying the subsequence length, we modify no other hyperparameters, including the random seed and number of training epochs (205).",
"Segmenting a corpus into subsequences results in different effective context windows for different timesteps depending on where they fall in a segment.",
"Subsequence length L is an upper bound on the effective context window at each timestep.",
"When making the first prediction, the model attends only to the first input token.",
"When making the second prediction, the model attends to the first two inputs, and so on, up to the L th timestep where the model can attend to all input tokens when making the L th prediction.",
"Table 1 explores the effect of subsequence length in the B & A model on training runtime and on dev.",
"set perplexity and runtime.",
"4 We fix the number use the ground truth token.",
"This has the same complexity as sampling the token with the highest probability.",
"4 For consistency, throughout the paper we run inference with a batch size of one.",
"of tokens in each batch to 9 , 216 but vary the subsequence length L and batch size (so the product of the batch size and subsequence length remains at 9 , 216 ).",
"We report results for both nonoverlapping inference and sliding window inference with stride S = 1 , which generates only one new token per forward pass; it thus has the maximal effective context window for each generated token.",
"We find that performance increases as S decreases until it reaches a peak and then stops improving (not shown in Table 1).",
"5 We derive the following conclusions: Training on long sequences is expensive.",
"Models trained on subsequences of length 256 are twice as fast as models trained on subsequences of 3 , 072 tokens, but gains for even shorter lengths are negligible (Tab. 1, col. 2).",
"nonover-L = 512 to run slowly (in N.o. eval.), although during batched",
"N.o. eval.",
"they are slightly faster than the L = 512 model.",
"5 For example, the L = 3 , 072 model's performance peaked at S = 512 (used in Baevski and Auli (2018)) and then stopped improving.",
"Thus, the result shown in Table 1 for that model with S = 1 can also be achieved with S = 512 even though that runs 500 times faster, at 2.5k",
"tok./sec.",
"lapping evaluation, we see a monotonic decrease in dev.",
"perplexity when increasing L (Tab. 1, col. 3).",
"Increasing the minimum effective context window size is more important than increasing the maximum one.",
"Using a sliding window for token-by-token evaluation substantially improves results for all models (Tab. 1, col. 5).",
"Here, we see negligible improvement between the models trained with subsequence lengths of 1 , 024 and 3 , 072 tokens (0.05 perplexity).",
"This approach improves results by increasing the minimum amount of context available at each timestep which indicates that long contexts may not be beneficial to transformer models, but very short contexts are harmful.",
"However, sliding window inference can be expensive since each token is encoded many times.",
"For example, token-by-token inference for the L = 3 , 072 model is almost 300 times slower than nonoverlapping inference.",
"We propose a two-stage training routine that initially uses short input subsequences followed by long subsequences.",
"6 This method was previously applied to speed up the training of BERT (Devlin et al., 2019), but we show that it also improves perplexity.",
"We use sinusoidal position embeddings; learned position embeddings, which we do not consider, create a dependency between the parameterization and subsequence length.",
"In our experiments, we neither modify nor reset the state of the optimization algorithm between the two stages.",
"Our experimental setup is described in 2.",
"We do not change any hyperparameters other than reducing subsequence length while correspondingly increasing batch size to keep the number of tokens per batch constant.",
"As in the baseline, all models are trained for 205 epochs.",
"All models are trained in two stages; the second stage always uses a subsequence length of 3 , 072 , 6 Curriculum learning (Bengio et al., 2009) trains on easier inputs before progressing to harder ones.",
"Our approach does not change the order in which the training examples are given to the model, but instead modifies their lengths.",
"since that lead to the best performance (discussed at end of this subsection).",
"Appendix Table 6 shows the time each training routine takes to match the baseline model's performance on the validation set of WikiText-103.",
"7 Many configurations match this performance in less than half the time it takes to train the baseline itself; some reach baseline performance in only 37% of the time needed to train the baseline.",
"Although all models take less time to train than the baseline, Table 2 shows that many outperform it.",
"For example, the best modeltrained with subsequence length L = 128 until epoch 50 outperforms the baseline by 1.1 perplexity despite completing training in 87% of the time the baseline takes to do so.",
"The model that trains with L = 128 until epoch 100 achieves similarly strong results (17.62 perplexity) and finishes training in 74% of the time it takes the baseline.",
"8 These results are very robust to the choice of initial stage subsequence length and number of epochs.",
"Table 2 shows that all models with an initial stage of L = 1 , 024 tokens or less that switch to the second stage at epoch 125 or before beat the baseline by a large margin at the end of training.",
"Additionally, Appendix Table 6 shows that those models match the baseline's perplexity in at most 71% of the time it takes to train the baseline.",
"When we use nonoverlapping evaluation, the B & A baseline obtains 18.65 perplexity on the development set; our best model obtains 17.52.",
"When we use sliding window evaluation (following Baevski & Auli, we use stride S = 512 ), our best 7 Table 7 in the appendix shows the epoch at which every model matched the baseline's performance.",
"8 Table 8 in the appendix shows the total time it took to train each model.",
"model obtains 16.89 perplexity, a large improvement on the 17.92 B & A result in that setting.",
"On the test set, using the same sliding window evaluation, our model obtains 17.56 perplexity, a substantial gain over the baseline's 18.70 test-set perplexity.",
"Appendix Table 10 shows that our best model uses almost five times less memory during the first stage than the baseline.",
"We also found that setting L to less than 3 , 072 tokens in the second stage degraded performance.",
"(Appendix Table 9 shows staged training results with an initial stage length of 128 for 50 epochs (as in the best model) and varying lengths for the second stage.",
"We found this to also be true for other initial stage lengths and",
"epochs.) Unlike results in Table 1, where we show that models with L larger than 1 , 024 do not substantially improve token-by-token generation perplexity, models trained using staged training improve when given longer inputs (Appendix Table 9).",
"Further, we explored using more than two stages (up to six), but this did not outperform our two-stage curriculum.",
"Sliding window inference substantially improves performance by increasing the minimum effective context window size.",
"But it is very slow.",
"We could solve this by letting the model attend to representations of the previous subsequence during inference on the current one.",
"In this case, the same token representations would be used in different positions since a token generated near the end of one subsequence would be cached and reused near the start of the next one.",
"However, transformer model representations entangle positional and content information, so a cached token representation would encode an incorrect position when reused in a new position.",
"TransformerXL (Dai et al., 2019) uses relative position embeddings to solve this problem.",
"However, that approach is slower and uses more parameters and memory than the baseline transformer.",
"9 We solve this using no extra parameters, memory, or runtime.",
"We also show that our method can use much shorter input subsequences and still achieve superior performance.",
"Transformer Language Models The baseline transformer LM, given a token list T of length L and a tensor P containing the first L position embeddings, produces L next-token predictions using the following procedure: 1. Embed each token in T , producing tensor X .",
"2. Add the position embedding of each index to the token at that index: X = X + P .",
"3. Feed X through each transformer layer.",
"The self-attention sublayer in each transformer layer is invoked as follows: self-attention ( key = X , query = X , value = X ) 4. Transform the outputs of the last transformer layer using the softmax layer, giving L next-token probability distributions.",
"We propose to let the model reuse previous outputs by making each output contain no explicit positional information.",
"To do this, we modify the 9 The self-attention coefficients between q queries and k keys in TransformerXL are the sum of two dot products of size q k ; the unmodified attention sublayer and our PIA method both compute only one dot product of size q k .",
"We also benchmarked the TransformerXL model using its publicly released code and found that their relative position embeddings slow inference by 22% and require 26% more parameters than their implementation of the unmodified self-attention sublayer.",
"model so that it does not add position embeddings at the beginning of the computation (step 2), but rather adds them to the query and key vectors at each layer (but not to the value vectors).",
"The outputs at each layer are the transformed, weighted sums of the value vectors, and, since the value vectors in our model do not contain explicit positional information, the outputs also do not.",
"Although PIA sublayer outputs contain no explicit positioning information, the attention mechanism can still compute position-dependent outputs because positional information is added to the query and key vectors.",
"Our method is implementable in just a few lines of code.",
"In the unmodified transformer, to generate a string whose length exceeds L , it would have to be split into separate subsequences, and the model would be unable to attend to the previous subsequence when generating the current one.",
"Using PIA, we can store and attend to representations of the previous subsequence since they no longer contain any explicit positioning information.",
"Therefore, all our PIA models use a cache, where representations from the previous forward pass are stored and attended to in the next forward pass.",
"Caching makes generation faster.",
"The complexity of the attention mechanism is O ( q k ) where q is the number of queries (outputs) and k is the number of key-value pairs (inputs).",
"To generate a sequence whose length exceeds L using token-by-token generation in the unmodified transformer (with subsequence length L ), attention takes O ( L 2 ) time (since there are L queries and L keys).",
"Using PIA and caching, we can reuse L 1 of the previous outputs at every layer.",
"Thus, our attention sublayer takes O ( L ) time (because now there is a single query and L keys).",
"Our approach is useful in scenarios where we need to evaluate or generate sequences that are longer than the model's subsequence length.",
"Therefore, it would not be applicable to sequence-to-sequence tasks such as sentence-level translation, where sequence lengths are short.",
"Most language models, including B & A , train on their data as nonoverlapping subsequences.",
"This means that training subsequences can be shuffled at each epoch and consumed in random order.",
"However, when using PIA, we would like the cache to contain the previous subsequence.",
"We therefore do not shuffle the data, making the cached subsequence the previously occurring one.",
"Figure",
"1(c) depicts training with a cache that contains representations of the previous subsequence.",
"We use the experimental setup described in 2.",
"The B & A baseline achieves 18.65 on the development set.",
"We train two additional baselines, the first uses PIA without caching and the second uses caching but no PIA.",
"If just PIA is used (without caching), performance degrades to 19.35 perplexity, but the model's speed and memory usage do not change.",
"Using caching without PIA severely hurts performance, obtaining 41.59 perplexity.",
"Disabling data shuffling in the PIA-only model achieves similar performance to that model when it does use data shuffling, at 19.44 perplexity.",
"Not shuffling the data is necessary for recurrent-style training that caches previously computed subsequence representations.",
"Our next experiments use the recurrent-style training of Dai et al. (2019), where we receive L new tokens at every training iteration and attend to L (cid:48) cached representations (of the subsequence of tokens that came immediately prior to the L new to-kens).",
"As before, we output L predictions at every training iteration.",
"This means that the maximal and minimal effective context window sizes are L (cid:48) + L and L (cid:48) + 1 , respectively.",
"In all our models with PIA and caching, we set L (cid:48) = L because a manual exploration of different models where L (cid:48) (cid:54) = L did not yield better results.",
"Table 3 compares the results of our models that use PIA and caching to the baseline on the WikiText-103 dev.",
"set.",
"Evaluation and generation speeds are shown in the nonoverlapping (N.o.) and sliding window (S.W., with stride S = 1 ) speed columns, respectively.",
"10 Unlike in the baseline, token-by-token evaluation in our model achieves the same perplexity as nonoverlapping evaluation 10 Note that Baevski and Auli (2018) show that the baseline model can also achieve 17.92 during S.W. evaluation, when S = 512 , with a speed of 2.5k tokens per second.",
"since in both cases, the predictions for each input subsequence are conditioned not only on the current input, but also on the previous input, making the context window the same in both inference modes (in both cases, at every timestep, the context window is all tokens up to that timestep).",
"Table 3 shows that as we increase subsequence length, perplexity improves, peaking at 512 before starting to degrade.",
"Our best model obtains 17.85 perplexity, which is multiple standard deviations better than the baseline (18.65, N.o.).",
"Table 5 in 6 shows a similar gain on the test set.",
"The best model runs 1% slower than the baseline during N.o. eval.",
"(since caching reduces the speed gain from smaller attention matrices in this mode).",
"Table 10 (appendix) shows that it uses less than half of the memory the baseline does during training.",
"Our best model trains 55% faster than the baseline.",
"Our best model, with subsequence length 512 , has attention matrices of size 512 1 , 024 (since we have 512 queriesone per every new tokenand 1 , 024 keys and 1 , 024 valuesone per every new token and every cached token).",
"In the baseline, all attention matrices are of size 3 , 072 3 , 072 .",
"Caching previously computed representations lets us do token-by-token generation efficiently when generating more than L tokens.",
"Our model is nine times faster than the baseline at token-by-token generation even as it achieves better perplexity and uses much less memory (Tab. 3, col. 5).",
"PIA and caching also greatly improve perplexity on the Toronto Book Corpus; see A.5 in the appendix.",
"To assess whether the gains from staged training, PIA and caching are additive, we take our best caching PIA model, with subsequence length 512 , and apply staged training to it, training it with a subsequence length of between 32 to 256 for the first half of training.",
"11 Table 4 shows the results.",
"As in 4.2, where staged training was applied to the unmodified baseline, the results are very robust to the choice of initial stage subsequence length, with all the different choices improving perplexity over the model that does not use staged training.",
"The best model (with initial subsequence length 128), which we call Shortformer, achieves 17.47 dev.",
"set perplexity and trains 65% faster than the baseline.",
"Since its attention matrices are of dimension 512 1 , 024 (the baseline's are 3 , 072 3 , 072 ), our model uses less memory (A.4, appendix).",
"It has the same number of parameters as the baseline.",
"Figure 3 (appendix) compares our best models using each method we presented (and their combination) to the baseline.",
"It shows that combining caching, PIA and staged training (Shortformer) yields the quickest training and best perplexity when using nonoverlapping evaluation.",
"Evaluation speed is similar for all of these models.",
"Finally, Table 5 compares our best models on the test set of WikiText-103 to the state of the art. 12 Shortformer is almost twice as fast to train as the baseline and achieves superior results.",
"Like the 11 We picked 50% of epochs as the length of the first stage since that produced near-optimal results at a fast speed in 4.",
"12 We benchmarked speed, on V100 GPUs, for all models that had publicly available code.",
"best model from 5.3, it is nine times faster than the baseline for token-by-token generation.",
"Since it uses a cache, sliding window evaluation does not increase Shortformer's performance.",
"By training the baseline with staged training (and no PIA or caching), we obtain a model (our best model from 4.2) that, with sliding window",
"eval., obtains even better results, but that model is much slower than Shortformer (Table 5, second-to-last row).",
"Shortformer outperforms the baseline's perplexity and performs within a standard deviation of the Sandwich Transformer (Press et al., 2020) and TransformerXL.",
"It does not outperform the Compressive Transformer (Rae et al., 2020), Routing Transformer (Roy et al., 2020) and kNN-LM (Khandelwal et al., 2020), which make orthogonal improvements that can be applied to any language model, at the price of slower decoding.",
"Combining them with our approach may yield further gains.",
"These results are similar to those we obtain on the Toronto Book Corpus (A.5 in the appendix).",
"Staged Training Devlin et al. (2019) used a staged training routine for BERT by performing the first 90% of training on short subsequences (of length 128 ) before moving on to longer ones (of length 512 ).",
"They use this method to speed training, but we show that also it improves perplexity and analyze different configurations of this method.",
"Many recent papers have explored improving transformer efficiency by reducing the quadratic cost of self-attention, motivated by scaling to longer sequences (Kitaev et al., 2020; Roy et al., 2020; Tay et al., 2020).",
"We instead demonstrate improved results with shorter sequences, which naturally also improve efficiency.",
"One way to reduce transformer memory usage is to sparsify the attention matrix by letting the model attend only to a subset of nearby tokens at each timestep (Child et al., 2019; Beltagy et al., 2020; Roy et al., 2020).",
"Training on shorter subsequence lengths is much more efficient: we use multiple, but much smaller, attention matrices.",
"Since attention uses memory and computation in a way that scales quadratically with input size, splitting the inputs into multiple subsequences each processed independently lets us use less memory and run faster.",
"Like our method, Beltagy et al. (2020) attend at each timestep to a growing number of neighbors as training progresses, but they use five stages, which we found not to be superior to our two-staged method.",
"The adaptive attention span model of Sukhbaatar et al. (2019) learns the maximum effective context window sizes for each head at each layer independently.",
"Like in our method, context window sizes are smaller at the start of training and lengthen as training progresses.",
"We show that a simple approach of manually choosing two subsequence lengths is highly effective.",
"In addition, keeping subsequence lengths equal across all heads and layers lets us save memory and runtime.",
"Position-Infused Attention TransformerXL (Dai et al., 2019) caches and attends to previous representations using an attention sublayer that uses relative positioning (Shaw et al., 2018).",
"It runs much slower than the unmodified attention sublayer, requires extra parameters, and requires internally modifying the self-attention sublayer, while our PIA method (5) does not.",
"In parallel with our work, Ke et al. (2020) compute attention coefficients by summing two attention matrices, one based on position-position interactions and the other on content-content interactions.",
"As in PIA, they do not add position embeddings at the bottom of the model.",
"They present results only for BERT, which uses much smaller subsequences than our models.",
"training on shorter subsequences and then progressing to longer ones via staged training, we improve perplexity and reduce training time.",
"We additionally propose position-infused attention, which enables caching and efficiently attending to previous outputs; we show that models using this method do not require large input subsequences.",
"We finally show that these two methods can be combined to produce a speedier and more accurate model.",
"We thank Tim Dettmers, Jungo Kasai, Gabriel Il-harco, Hao Peng, Sewon Min, Mandar Joshi, Omer Levy, Luke Zettlemoyer, Julian Michael, Edward Misback, Sofia Serrano, Nikolaos Pappas, Jesse Dodge, Myle Ott, and Sam Shleifer for their valuable feedback and fruitful discussions."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"result",
"other",
"method",
"method",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"objective",
"other",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"other",
"objective",
"other",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"result",
"objective",
"result",
"other"
] |
[
"Polysynthetic languages have exceptionally large and sparse vocabularies, thanks to the number of morpheme slots and combinations in a word.",
"This complexity, together with a general scarcity of written data, poses a challenge to the development of natural language technologies.",
"To address this challenge, we offer linguistically-informed approaches for bootstrapping a neural morphological analyzer, and demonstrate its application to Kunwinjku, a polysynthetic Australian language.",
"We generate data from a finite state transducer to train an encoder-decoder model.",
"We improve the model by hallucinating missing linguistic structure into the training data, and by resampling from a Zipf distribution to simulate a more natural distribution of morphemes.",
"The best model accounts for all instances of reduplication in the test set and achieves an accuracy of 94.7% overall, a 10 percentage point improvement over the FST baseline.",
"This process demonstrates the feasibility of bootstrapping a neural morph analyzer from minimal resources.",
"Polysynthesis represents the high point of morphological complexity.",
"For example, in Kunwinjku, a language of northern Australia (ISO gup), the word ngarriwokyibidbidbuni contains six morphs: (1) ngarri-1pl.exclwok-wordyi-COMbid-REDUPbidbu-go.upniPI We were talking as we climbed up' Example (1) illustrates common features of polysynthesis: fusion, incorporation, and reduplication.",
"Fusion combines multiple grammatical functions into a single morph, leading to large morph classes, and challenging the item-and-arrangement leanings of finite state morphology.",
"Incorporation presents a modelling challenge because rule-based methods are unable to enumerate an open class, and machine learning methods need to learn how to recognize the boundary between contiguous large or open morph classes.",
"Reduplication is also a challenge because it copies and prepends a portion of the verb root to itself, requiring a nonlinear or multi-step process.",
"Tackling these phenomena using finite state transducers (FSTs) involves a combination of technical devices whose details depend on subtleties of the morphological analysis (cf. Arppe et al., 2017).",
"There remains a need for more investigation of polysynthetic languages to deepen our understanding of the interplay between the options on the computational side, and the most parsimonious treatment on the linguistic side.",
"Morphological complexity leads to data sparsity, as the combinatorial possibilities multiply with each morpheme slot: most morphologically complex words will be rare.",
"Furthermore, many morphologically complex languages are also endangered, making it difficult to collect large corpora.",
"Thus, polysynthetic languages challenge existing ways of building tools and applications for the communities that speak these languages.",
"In this work we investigate Kunwinjku, spoken by about 2,000 people in West Arnhem in the far north of Australia.",
"Members of the community have expressed interest in using technology to support language learning and literacy development.",
"Thus, we face the challenge of developing useful language technologies on top of robust models, with few resources and in a short space of time.",
"We envisage morphologically-aware technologies including dictionary interfaces, spell checkers, text autocompletion, and tools for language learning (cf. Littell et al., 2018).",
"low resource morph analysis, neural approaches to morph analysis, and data augmentation for morphological reinflection (Sec. 2).",
"Next, we describe our existing finite state model for Kunwinjku verbs (Sec. 3).",
"In Section 4 we present a neural approach which addresses gaps in the previous model, including the ability to analyze reduplication and to exploit distributional information.",
"Next we discuss our evaluation metrics and our handling of syncretism and ambiguity (Sec. 5).",
"Finally, the results are presented in Section 6, including a discussion of how well the neural models address the shortcomings of the FST model.",
"Our contributions include:",
"(a) a robust morphological analyzer for verbs in a polysynthetic language;",
"(b) a method for augmenting the training data with complex, missing structure; and",
"(c) a technique for scoring the likelihood of generated training examples.",
"Finite state transducers (FSTs) are a popular choice for modelling the morphology of polysynthetic languages.",
"Several toolkits exist, including XFST, Foma, and HFST (Beesley and Karttunen, 2003; Hulden, 2009; Linden et al., 2013).",
"Each one is an optimized implementation of the finite state calculus (Kaplan and Kay, 1994), providing additional support for morphosyntactic and morphophonological processes.",
"Most recent work on computational modelling of morphologically rich languages is built on the foundation of these tools (Arppe et al., 2017; Littell, 2018; Andriyanets and Tyers, 2018; Chen and Schwartz, 2018; Cardenas and Zeman, 2018).",
"As a case in point, we applied Foma in the analysis of the morphology of Kunwinjku verbs, but ran into difficulties accounting for out-of-vocabulary (OOV) items in open morph classes.",
"We also stopped short of addressing complex features like reduplication and verbal compounding, for technical reasons related to the expressiveness of FSTs (cf. Lane and Bird, 2019).",
"Recently, neural models have gained popularity for morphological processing because they address some of the weakness of FSTs: subword modeling shows an ability to remain robust in the face of out-of-vocabulary items, and recurrent neural architectures with attention have shown a capacity to learn representations of context which allow the model to incorporate the notion of long-distance dependencies (Bahdanau et al., 2014).",
"Neural morphological analyzers can be developed from training data generated by an FST.",
"These analyzers are more robust, handling variation, out-of-vocabulary morphs, and unseen tag combinations (Micher, 2017; Moeller et al., 2018; Schwartz et al., 2019).",
"They provide 100% coverage, always providing a best guess analysis for any surface form.",
"Of course, FSTs can be modified to accommodate exceptions and OOV morphs, but this requires explicit modelling and usually does not achieve the robustness of neural analyzers (Schwartz et al., 2019).",
"Anastasopoulos and Neubig (2019) found that they could augment their training set by hallucinating new stems, increasing accuracy on their test set by 10 percent.",
"This method involved substituting random characters from the target language's alphabet into the region identified by alignment as the probable root.",
"For the sake of cross-lingual generalizability, their method does not consider language-specific structure.",
"The task of morphological analysis, mapping an inflected form to its root and grammatical specifi-cations, is similar to the task of machine transliteration, mapping a sequence of words or characters from source to target language without reordering.",
"For example in Kunwinjku, consider the segmentation and gloss of the verb karridjalbebbehni : (2) karri-12adjal-justbebbeh-DISTRni sit.NP Let's just sit down separately' [E.497] Since the process of segmenting and glossing the verb does not contain any reorderings, the mapping of surface to glossed forms can be viewed as transliteration.",
"Finite state transducers have long been viewed as an ideal framework to model morphology (Beesley and Karttunen, 2003).",
"They are still a popular choice for low-resource polysynthetic languages (cf. Chen and Schwartz, 2018; Lachler et al., 2018).",
"Here we summarize some features of Kunwinjku and describe the finite state implementation.",
"Kunwinjku is a polysynthetic agglutinating language, with verbs having up to 15 affix slots (Fig. 1).",
"Morphs combine in a way that is almost lego-like (Evans, 2003; Baker and Harvey, 2003).",
"We implement morphotactics and mor-phophonology as separate stages, following usual practice (Fig. 2).",
"However, this is not conducive to modelling noun incorporation, valence-altering morphology, fusion, or reduplication, all typical phenomena in polysynthetic languages.",
"Kunwinjku has two kinds of noun incorporation.",
"General incorporable nouns ( GIN ) are a closed class, manifesting a variety of grammatical roles (3).",
"Body part incorporable nouns ( BPIN ) are an open class, restricting the scope of the action (4).",
"(3) nga-1mkak-nightkeleminjfear.P I was afraid at night' (4) nga-1mbid-handkeleminj fear.P I was afraid for my hand' [E.458] The open class BPIN occupy slot 3 and will be adjacent to the verb root whenever slots 2 and 1 are empty, as is common.",
"With adjacent open class slots, Kunwinjku opens up the possibility of there being contiguous OOV morphs .",
"In Kunwinjku there is no template to help distinguish members of these adjacent classes, thus creating a novel challenge for predicting morph boundaries.",
"While transitivity of the verb is lexically defined, there are three morph classes which signal valency change: the benefactive ( BEN ), comitative ( COM ), and reflexive ( RR ).",
"More details about the respective function of these morphs is given in Lane and Bird (2019), but here it suffices to say their presence in a verb makes resolving valency impossible without wider sentential context.",
"This impacts the FST modelling, as we are unable to restrict possible illegal analyses on this basis, which results in overgeneration.",
"Morphological fusion can lead to a proliferation of morphs and analyses.",
"In Kunwinjku, there are no fewer than 157 possibilities for the first slot of the verb, fusing person and number (for both subject and object) along with tense.",
"We find that this fusion affects decisions around tokenization of the data in preparation for training the seq2seq model (Sec. 4.2).",
"Most of the world's languages employ reduplication productively for diverse purposes (Rubino, 2005).",
"It is a common feature of polysynthetic languages in particular.",
"While modelling reduplication using FSTs is possible, the general consensus is that modelling partially reduplicative processes explode the state space of the model, and are burdensome to develop (Culy, 1985; Roark et al., 2007; Dras et al., 2012).",
"For these reasons, the Kunwinjku FST model does not include an implementation of the language's complex reduplication system.",
"In Kunwinjku, there are three types of verbal reduplication: iterative, inceptive, and extended.",
"Each type of reduplication has 13 (CV) templates which can be applied to the verb root to express the semantics associated with each type.",
"In Section 4.4 we discuss an approach to ensure that the neural model handles Kunwinjku's complex reduplication system.",
"We establish a baseline by scoring the FST on a set of n = 304 inflected verbs.",
"The data was collected from the Kunwinjku Bible (which targets a modern vernacular), a language primer (Etherington and Etherington, 1998), and a website (Bininj Kunwok Language Project, 2019).",
"The data was glossed in consultation with language experts.",
"We define coverage as number of analysed forms, and accuracy as the number of correctly analyzed forms, both as a fraction of n .",
"We define precision Accuracy Coverage Precision FST 84.4 88.5 95.4 Figure 3: All-or-nothing accuracy and coverage of the Kunwinjku FST Analyzer on the test set of 304 inflected verbs.",
"as the number of correctly analysed forms as a fraction of the number of analysed forms.",
"We distinguish accuracy and precision because the ability of a model to withhold prediction in case of uncertainty is useful in certain application contexts.",
"The results of the evaluation show that while the FST is fairly high-precision, its accuracy is limited by the imperfect coverage of verb stems in the lexicon (Fig. 3).",
"The FST relies on a lexicon to provide analyses for inflected forms, and when it comes across OOV morphs, or verb stems modified by processes like reduplication, it fails to return an analysis.",
"We sort the coverage issues into classes, and remark that the largest source of error comes from reduplication, followed by variation in tense/aspect/mood (TAM) inflection, OOV stems, OOV incorporated nominals, and exceptions to the d-flapping alternation rule (Fig. 4).",
"We address each of these problems in the following sections.",
"In this section we discuss the approach which leverages an incomplete FST to produce a more robust neural morphological analyzer for Kunwinjku.",
"Those steps include generating training pairs from an FST, tokenizing the data, resampling from the dataset to simulate distributional signal, hallucinating missing structures into the dataset, and training a neural encoder-decoder model on the resampled data.",
"Given our low resource setting, training a neural encoder-decoder model like those used in neural machine translation (NMT) is not possible without augmenting what resources we do have.",
"Following the established template of recent work on neural morphological analysis for low resource polysynthetic languages (Micher, 2017; Moeller et al., 2018; Schwartz et al., 2019) we use the FST model to generate morphotactically valid pairs of surface and analyzed verbs.",
"For the purpose of training the base neural model, we adapted the Foma tool to randomly generate 3,000,000 surface/analysis pairs from the FST (see Fig. 6 for an example of a tokenized pair).",
"An automatic process removed duplicates, leaving us with 2,666,243 unique pairs which we partitioned into an .8/.1/.1 train/dev/test split.",
"In Schwartz et al. (2019)'s work on modelling complex nouns in Yupik, they generate a training set which exhaustively pairs every Yupik noun root with every inflectional suffix, regardless of the resulting semantic fidelity.",
"In our case, it was not feasible to exhaustively generate the training data, as it would have led to 4 .",
"9 10 12 instances (Fig. 5).",
"In effect, the training set represents .00004% of the space over which we seek to generalize.",
"To prepare the data for training a seq2seq model, we first collect the glossed inflected verb forms, perform tokenization, and organize them into source-target pairs.",
"We chose a tokenization scheme which treats graphemes as atomic units.",
"Morph labels are also treated mostly as atomic units, with the exception being for fused labels which we break into their individual linguistic components (Fig. 6).",
"For example the pronominal morph in Kunwinjku can simultaneously express both subject and object, as well as tense.",
"Consider the pronominal prefix kabenbenewhich we gloss as 3sg.3ua.nonpast and tokenize as [ 3sg . 3ua . nonpast ] .",
"Choosing to break up labels in the fused morphological slots prevents an unnecessary proliferation of entries in the target vocabulary, as individual units like 3sg , 3ua , and past can be shared by multiple pronominals.",
"Our choice to tokenize the source forms and verb root strings at the grapheme level reflects our desire to loosen the model's vocabulary such that it is TSO DIR ASP MSC1 BEN MSC2 GIN BPIN COM root RR TAM Total 157 x 3 x 2 x 24 x 2 x 4 x 78 x 32 x 2 x 541 x 2 x 5 = 4 .",
"equipped to handle variation at the orthographic level, and possible OOV stems.",
"Generating from an FST at random fails to capture valuable information about the distribution of morphs.",
"For example in Kunwinjku, body part incorporable nouns ( BPIN ) can occur adjacent to the verb root.",
"Both categories are open class, meaning that there is a high likelihood in the low-resource setting that either or both are out-of-vocabulary.",
"How then does the analyzer decide where to place the boundary?",
"Perhaps the entire sequence is a single out-of-vocabulary root.",
"Our intuition is that knowing the likelihood of co-occurrence for two analysis tags can provide signal to help disambiguate.",
"Some morph sequences are inevitably more frequent than others, and we would like to represent that information in the training set.",
"To this end, we propose a method for simulating distributional information in the training set.",
"First, we want to score any analyzed form, giving higher scores to forms that contain more likely sequences.",
"We define M as the sequence of morph tags which make up an analysis, where m i is the morph tag at index i .",
"The scoring function is defined as follows: (5) score ( M ) = 1 n n (cid:80) i log P ( m i , m i +1 ) The joint probability of adjacent tags is estimated from a corpus of unannotated text, here, selected books from the Kunwinjku Bible.",
"Everything the existing FST can analyse as a verb is considered to be a verb, and is used to calculate the joint probability table.",
"The training set is tagged with the FST 1 , and ranked according to the scoring function.",
"We split the sorted data into buckets defined by their morphotactic likelihood, and then sample from them according to a Zipf distribution.",
"The effect is that more probable sequences are more likely to occur in the training data than less likely examples, thus approximating the distribution of morphotactic structure we would expect to see in a natural corpus.",
"One shortcoming of the Kunwinjku FST model is that it does not account for reduplicative structure, due to the complexity of modelling recursive structure in the linear context of finite state machines (Culy, 1985; Roark et al., 2007).",
"As noted previously, reduplication is responsible for 28.9% of the FST's coverage error when evaluated on the test set of inflected verbs.",
"If reduplication is not modeled by the FST, then reduplication will also not be represented in the training set generated by that FST.",
"We posit that if data hallucination has been shown to improve performance in the language-agnostic setting (Anastasopoulos and Neubig, 2019; Silfverberg et al., 2017), than it is likely that linguistically-informed hallucination can provide a similar reinforcement in Kunwinjku.",
"In line with this, we developed an extension to the data generation process which hallucinates reduplicative structure into a subset of the training data.",
"Kunwinjku has three main types of partial verbal reduplication signaling iterative, inceptive, and extended meaning.",
"Moreover, each type of reduplication can have more than one CV template, depending on which paradigm the verb belongs to.",
"Figure 7 documents the three types of reduplication, and serves as the template for the reduplicative structure hallucinator.",
"First, the hallucinator module samples n% of the FST-generated pairs and strips away the affixes to isolate the root.",
"For each root, one of the three reduplication types (iterative, inceptive, or extended) is selected at random, and the root is matched against the available CV templates.",
"The longest pattern which matches the root is selected, and the pattern-matching portion of the root is copied and prepended to the root.",
"Both the surface and analyzed form are updated to reflect the change, and the new training pairs are appended to the original list of FST-generated pairs.",
"We trained an encoder-decoder model on the dataset of 2,114,710 surface/analyzed form pairs (the Base model).",
"We then hallucinate reduplication into 8% of the Base data, and b i k a nj ng u n e ng > [ 3sg . 3Hsg . PST] [ BPIN ] ng u [ PP ] Figure 6: An example of a tokenized source/target training pair, where we treat source graphemes, target labels, fused target label components, and verb root graphemes as atomic units.",
"The model setup is similar to the one described in (Schwartz et al., 2019).",
"We use MarianNMT: a fast, open-source toolkit which implements neural models for machine translation (Junczys-Dowmunt et al., 2018).",
"We used a shallow attentional encoder-decoder model (Bahdanau et al., 2014) using the parameters described in (Sennrich et al., 2016): the encoder and decoder each have 1 hidden layer of size 1024.",
"We use cross-validation as the validation metric, set dropout to .2 on all RNN inputs, and enable early stopping to avoid overfitting.",
"We use the same setup and parameters for all NMT models mentioned in this paper.",
"A full accounting of the MarianNMT settings used can be seen in the Appendix.",
"We begin by reporting the performance of the neural models in terms of coverage, accuracy, and precision, so that they can be compared with the evaluation of the FST model, described in Section 3.2.",
"Additionally, we measure the performance of the neural models in terms of precision (P), recall (R), and F1 on the morph level: For each morph tag in the gold target test set, we calculate P, R, and F1, and then calculate the macro-average P, R, and F1 across all tags in the test set (Fig. 9).",
"This method is more granular than all-or-nothing accuracy over the entire translated sequence, and allows us to get a better picture of how the models are doing on the basis of individual tags.",
"noted by Schwartz et al. 2019; Moeller et al. 2018).",
"For example, the pronominal prefix kabindican be glossed: [3ua.3ua.nonpast] , or [3pl.3ua.nonpast] , or [3ua.3pl.nonpast] , or [3pl.3pl.nonpast] .",
"Here, the pronominal expresses both the subject and object, and is not explicit whether that subject or object is the 3rd person dual or plural, in any of four possible combinations.",
"The disambiguation cannot be resolved at the level of the isolated verb.",
"Our initial experiment with the base data set achieved 100% coverage and 68.3% accuracy on the test set.",
"When confronted by the same problem, Moeller et al. (2018) decided to collapse ambiguous tags into an underspecified meta-tag.",
"For example, for the Kunwinjku data, we might collapse the four tags above into [3pl.3pl.nonpast] .",
"However, doing so results in a potential loss of information.",
"Given the wider sentential context, the pronominal could be possibly be disambiguated, so long as the distinction is preserved and all equally-valid analyses are returned.",
"Further, as Schwartz et al. (2019) point out, in the Yupik language it is possible for this ambiguity to exist across other categories which are not easily collapsed.",
"In Kunwinjku, an example of this would be the pronominals [1sg.2.past] and [3sg.past] which differ in terms of number and valency, and yet share the same null surface form.",
"Their differences are such that they can not be easily collapsed into a single meta-tag.",
"Therefore we do not penalize the model for producing any variation of equally valid analyses given the surface form, and for each model we adjust the evaluation for syncretism in a post-processing step.",
"All of the neural models outperform the FST in terms of accuracy and coverage (Fig. 8).",
"However, the FST is more precise, and this may be useful in certain application contexts.",
"The best model is Base+halluc+resample, which improves on the FST by 10.3 percentage points.",
"On the morph-level, we see that the neural models containing the hallucinated reduplication data outperform the base neural model (Fig. 9).",
"Precision Recall F1 88.8 89.9 89.0 91.6 92.6 91.8 93.7 93.6 93.4",
"We posited that the difficulties encountered by the FST modelnamely reduplication, out-of-vocabulary items, and spelling variationcould be at least partially addressed by training a neural model on character and tag sequences, and hallucinating instances of reduplication into the training set.",
"For the most part, this held true, as we see gains across all error classes (cf. Sec. 3.2).",
"Here we report performance with respect to the three largest error classes: reduplication, OOV verbs, and OOV nouns.",
"As expected, neither the FST nor the Base neural model succeeds in recognizing reduplication.",
"It would be impossible, as the REDUP tag does not appear in either of their vocabularies.",
"The Base+halluc model's performance gain over the Base model can be accounted for entirely by the fact that it achieved 100% recall of reduplicative structure.",
"Precision, on the other hand was 57.9%.",
"Looking at the errors, we find that the imprecise predictions were all applied to instances about which the system was already wrong in previous Unseen Verbs Base+halluc+resample (cid:88) / (cid:55) wobekka ng [GIN]bekka (cid:55) nga kohbanjm inj [GIN][REDUP]me (cid:55) nga rrukkendi dukkendi (cid:88) ka menyime [GIN]yime (cid:55) yimalng darrkiddi darrke[PERSIST] (cid:55) ngam dolkka ng [DIR][GIN]ka (cid:55) dolkka ng [GIN]ka (cid:55) ka rrukmirri dukmirri (cid:88) ngurrimirnde mornname rren mornname (cid:88) Unseen GIN/BPIN/ASP Base+halluc+resample (cid:88) / (cid:55) kan njilng marnbom [GIN] (cid:55) yiben kange marnbom [REDUP] (cid:55) kan kange murrngrayekwong [GIN] (cid:55) kankange murrng rayekwong [BPIN] (cid:88) kankangemurrng rayek wong [REDUP] (cid:55) kan kange marnbom [REDUP] (cid:55) ngarri bangme marnbuyi [BPIN] (cid:55) yi malng darrkiddi [GIN][REDUP] (cid:55) Figure 10: Column 1 shows the list of verbs and nouns (in bold) which are are unseen in the FST lexicon.",
"models, meaning that the impact of reduplicative hallucination between models was only positive.",
"In the Base+halluc+resample model, recall of reduplicative structure was also 100%, and precision increased slightly to 58.8%.",
"The neural models correctly identify some unseen verb stems, but still show room for improvement.",
"We observe a tendency across all neural models to predict verb stems which have been seen in training, and which are also a substring of the observed unknown root.",
"For example, the training set does not contain any verbs with the root dolkka , but it shows up 3 times in the test set.",
"The analyses of all dolkka -rooted verbs were the same in both the Base+halluc and Base+halluc+resample models: they propose ka , a known root from the training set, and presume dolkto be an incorporable noun 2 .",
"Figure 10 shows a sample of OOV verb stems and nouns from the test set.",
"In the unseen verbs table, this behavior of preferring previously observed verb stems is the cause of error in every case.",
"Further difficulty comes in distinguishing between general ( GIN ) and body-part ( BPIN ) incorporated noun classes.",
"The low rate of success in positing unknown incorporated nouns is, in 2 Possibly by virtue of its orthographic proximity to bolk, a common general incorporable noun which means land. large part, attributed to the fact that the large GIN and open BPIN classes often occur adjacent to each other and to the root.",
"The neural model has difficulty making useful predictions when multiple morphs in this region are previously unobserved.",
"Overall, the Base+halluc+resample model correctly posited 33% of unseen stems, and 12.5% of unseen nouns from the FST error analyses.",
"This technique to approximate distributional information led to a small improvement in overall accuracy, and in tag-level P/R/F1.",
"We had expected that this information might help the neural models learn something about the relative frequencies of GIN s or BPIN s, which could help make decisions about how to draw the boundary between unseen stems and unseen incorporated nominals.",
"Instead, we saw distributive information helped to disambiguate the boundaries between morph classes with fewer members.",
"One representative example is the case of yiki-mang , whose root is kimang .",
"Before resample, the neural models interpret the yias the comitative prefix yi, and injects a spurious COM tag into the analysis.",
"After resample, it correctly omits the COM tag, interpreting yias the 2nd person singular pronominal.",
"In the unfiltered FST-generated training data, COM occurs in 53% of instances.",
"In the resampled data, it occurs in 22% of instances.",
"When all morph labels are equally likely to occur, the model is just as likely to predict any morph label compatible with the character sequence.",
"Resampling the training data according to a more realistic distribution leads to stronger morph transition priors, which tip the scale in favor of the analysis with a more likely tag sequence.",
"We have shown that complex features of polysynthetic morphology, such as reduplication and distributional morphotactic information, can be simulated in the dataset and used to train a robust neural morphological analyzer for a polysynthetic language.",
"In particular, we showed that a robust neural model can be bootstrapped in a relatively short space of time from an incomplete FST.",
"This work represents a successful first iteration of a process whereby the morphological model can be continually improved.",
"Indeed, the concept of bootstrapping a model implies an iterative development story where much of the scaffolding used in early efforts will eventually fall away.",
"For example, once the bootstrapped model has been used to tag verbs containing reduplication, we can confirm the model's high-confidence predictions and retrain.",
"In this second iteration, we may find that we no longer need to hallucinate reduplication because it is sufficiently represented in the new training set.",
"Similarly, once we have applied the complete neural model to a corpus of natural text, we will no longer need to approximate distributional information.",
"For researchers developing robust morphological analyzers for low resource, morphologically complex languages, this work represents a template of model development which is well-suited for the context.",
"Producing a viable morphological analyzer is the first step towards building improved dictionary search interfaces, spell-checking tools, and computer-assisted language learning applications for communities who speak low-resource languages.",
"The pattern of training robust systems on data that has been augmented by the knowledge captured in symbolic systems could be applied to areas outside of morphological analysis, and is a promising avenue of future exploration.",
"We are grateful for the support of the Warddeken Rangers of West Arnhem.",
"This work was covered by a research permit from the Northern Land Council, and was sponsored by the Australian government through a PhD scholarship, and grants from the Australian Research Council and the Indigenous Language and Arts Program.",
"We are grateful to four anonymous reviewers for their feedback on an earlier version of this paper."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"abstain",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"A commonly observed problem with the state-of-the art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents.",
"The fact that automatic summarization may produce plausible-sounding yet inaccurate summaries is a major concern that limits its wide application.",
"In this paper we present an approach to address factual consistency in summarization.",
"We first propose an efficient automatic evaluation metric to measure factual consistency; next, we propose a novel learning algorithm that maximizes the proposed metric during model training.",
"Through extensive experiments, we confirm that our method is effective in improving factual consistency and even overall quality of the summaries, as judged by both automatic metrics and human evaluation.",
"Recent advances in neural text generation have led to significant improvement in the quality of abstractive summarization (Radford et al., 2019; Gehrmann et al., 2019; Lewis et al., 2019).",
"Despite this progress, there are still many limitations facing neural text summarization (Kryscinski et al., 2019), the most serious of which is the tendency to generate summaries that are not factually consistent with the input document; a factually consistent summary only contains statements that can be inferred from the source document.",
"Recent studies show that about 30% of the summaries generated by neural network sequence-to-sequence (seq2seq) models suffer from fact fabrication (Cao et al., 2018).",
"The standard training approach for seq2seq learning has been maximizing the log likelihood of the target given the input sequences (MLE).",
"It has empirically performed well as a surrogate loss Input: ...Klitschko doesn't have the legs, the power that he used to, said Lewis.",
"for evaluation metrics such as BLEU and ROUGE.",
"This empirical success can be ascribed to the fact that both BLEU and ROUGE are directly linked to the n-gram overlap between the output and the target sequences, which can be efficiently learned via MLE.",
"In contrast, metrics to capture factual consistency are much more elusive as they must take into account the relations among tokens in the context of an entire sequence.",
"The widely used ROUGE score is inadequate to quantify factual consistency (Kryscinski et al., 2019).",
"In fact, the lack of an effective (automatic) metric for factual consistency has been the major hurdle in improving abstractive summarization model training beyond MLE.",
"Table 1 shows an example of a factually inconsistent summary generated by fine-tuning the BART-large model (Lewis et al., 2019), which is a transformer based seq2seq model pre-trained on a large corpus with denoising objectives.",
"Standard MLE training produces summaries with factual errors that, in addition to hallucinating facts, sometimes even contradict the input article.",
"To make abstractive summarization models produce more factually consistent summaries, we need two critical components: an automatic evaluation metric for factual consistency and an effective training algorithm that maximizes Figure 1: Comparison between QAGS (top) and QUALS (bottom) protocols.",
"factualness.",
"Our main contributions lie in both areas.",
"First, we propose an efficient automatic evaluation metric for factual consistency that is a simplification of the recently published QAGS protocol (Wang et al., 2020).",
"Evaluating QAGS is computationally expensive and ill-suited for being part of the model training process.",
"Our proposed protocol achieves a 55x speedup while correlating closely with QAGS 1 .",
"Second, we propose a new contrastive learning method that uses factualness as a training objective.",
"We demonstrate through experiments that our method improves the factual consistency of summarization models measured by both automatic metrics such as QAGS as well as human evaluation.",
"In order to improve factual consistency of summarization models, we must have a metric to quantify it.",
"In addition, the metric needs to be computationally efficient so that we can incorporate it as part of the model training process.",
"We first describe the QAGS protocol and then present our QUALS protocol.",
"Given a summary and an input document, QAGS (Wang et al., 2020) scores the summary using a 4-steps pipeline: firstly, it extracts the named entities and noun phrases in the summary as",
"1 See Sec.",
"A.2 in the Appendix for details.",
"candidate answers using an answer extraction (AE) model; secondly, a question generation (QG) model takes in the summary, concatenating with each candidate answer to generate a corresponding question; thirdly, a question answering (QA) model is used to answer each generated question in the context of the summary and the input document, separately; finally, the answers from the QA model based on the summary and the input document are compared to calculate F1 score in terms of their word level overlap as the QAGS score.",
"Intuitively, for the same question, if the answer obtained from the input document matches that from the summary, it is an indication that the summary is factually consistent with the input document.",
"We show the QAGS pipeline in the top part of Figure",
"1. QAGS has the advantage of being interpretable and is shown to correlate well with human evaluation.",
"However, using QAGS directly as a part of the training process presents several challenges.",
"First, QAGS requires three separate models for AE, QG and QA.",
"In addition to the summarization model being trained, these models consume a significant amount of machine memory.",
"Second, performing these three steps separately takes a significant amount of time.",
"For good coverage in QAGS, multiple answers are extracted for a given summary and multiple questions are generated for each answer.",
"This means the QA model needs to perform inference on an exploding number of inputs even for one summary.",
"Indeed, QAGS evaluation on a training set would take 584 days on a single GPU.",
"2 2.2 QUALS (ours) In order to enable the use of a QA driven metric to maximize factual correctness during the training of summarization models, we propose QUALS (QUestion Answering with Language model score for Summarization), which is illustrated in the bottom part of Figure",
"1. QUALS is an efficient 2 See Sec.",
"A.2 in the Appendix for details.",
"metric that employs a single neural language model (QAGen), as proposed in (Shakeri et al., 2020), to generate both the questions and answers from the summary.",
"In particular, given a summary, QAGen outputs a question-answer (q-a) pair jointly, separated by a special token <a> as shown in Figure",
"2. Let LL summ ( q, a ) be the average log likelihood of generating the q-a pair from the given summary: LL summ ( q, a ) = 1 N q + N a N q (cid:88) i =1 log p QAGen ( q i | summ , q <i ) + N a (cid:88) i =1 log p QAGen ( a i | summ , q, a <i ) (cid:33) , where N q and N a are the number of tokens for the question and answer, respectively.",
"Note that we consider the log likelihood scores over both the question and answer tokens to account for factual consistency of both.",
"To obtain good coverage and diverse q-a pairs, we use diverse beam search (Vijayakumar et al., 2016) to generate 60 q-a pairs for a given summary with 60 diverse beam groups and a diverse beam strength of 0.5.",
"We then filter out low-quality q-a pairs by keeping only those with answers found in the input summary.",
"When multiple q-a pairs share the same answer, we only select the pair with the highest LL summ ( q, a ) .",
"Then given the input document, we simply evaluate the average log likelihood of the QAGen model producing the same q-a pairs, denoted as LL doc ( q, a ) .",
"Formally, given a summary and input document, QUALS score is computed as follows: QUALS ( doc , summ ) = 1 MM (cid:88) i =1 ( LL doc ( q i , a i ) LL summ ( q i , a i )) , where M is the number of q-a pairs selected on the summary.",
"There are two justifications for taking the difference between the log likelihood scores.",
"1. LL doc ( q, a ) alone only indicates the likelihood of the q-a pair given the document; subtracting LL summ ( q, a ) baselines it with the likelihood of generating the q-a pair given the summary.",
"E.g. a low LL doc ( q, a ) does not necessarily imply factual inconsistency it can be caused by the fact that the q-a pair itself is generated with low likelihood from the summary in the first place.",
"2. Documents may vary in style, vocabulary and topic, which lead to variations in log likelihood scores unrelated to factual consistency; LL doc ( q, a ) LL summ ( q, a ) can help normalize these domain-related shifts since both the document and summary share the same basic style, vocabulary and topic.",
"Although QUALS can be computed more efficiently, using it in the training process is not straightforward because one would need to backpropagate through generated sumaries and qa pairs.",
"We present our CONSEQ (CONtrastive SEQ2seq learning) algorithm that can effectively maximize such metrics in training.",
"To fix notation, x = x 1 , . . . , x m denotes a sequence of input tokens; y = y 1 , . . . , y n denotes a sequence of target output tokens; y = y 1 , . . . , y n denotes a sequence of generated tokens from a seq2seq model via sampling, i.e. y p ( | x ) , where is the parameter of the model.",
"Let r ( y, x ) be the evaluation (in our case the QUALS ) metric that we aim to maximize.",
"First, we train an initial seq2seq model with parameters 0 using the original labeled training set { x ( i ) , y ( i ) } via MLE.",
"Second, we collect ground truth labeled training target sequences y ( i ) as well as the sampled sequence y ( i ) to form a set of candidate sequences S = { y ( i ) , y ( i ) } .",
"Third, we construct S + and S from S based on the evaluation scores r and minimize the following loss function from the initial parameters 0 : L contrast = E x,s S + log p ( s | x ) (cid:124) (cid:123)(cid:122) (cid:125) L + contrast (1) E x,s S log (1 p ( s | x )) (cid:124) (cid:123)(cid:122) (cid:125) L contrast .",
"Intuitively, S + consists of highly rewarded sequences (factually consistent summaries) and minimizing L + contrast forces the model to generate high score sequences; likewise, S consists of poorly rewarded sequences (factually inconsistent summaries) and minimizing L contrast forces the model to move away from low score sequences.",
"We present the full method in Algorithm",
"1. Comparison with REINFORCE: The typical approach to directly optimize a nondifferentiable evaluation score during training Algorithm 1: CONSEQ Input: Initial seq2seq (summarization) model weights 0 via MLE, input and target sequences { x ( i ) , y ( i ) } , evaluation metric r .",
"where b is a baseline sequence, conditionally independent of y given , x .",
"To see the connection with CONSEQ , suppose the reward r is either 0 or",
"1. If r ( y, x ) = 1 and r ( b, x ) = 0 , the sampled sequence y is strongly rewarded compared to baseline and Eq.",
"2 reduces to (cid:53) log p ( y | x ) .",
"On the other hand, if r ( y, x ) = 0 and r ( b, x ) = 1 , the sampled sequence is strongly discouraged and Eq.",
"2 reduces to (cid:53) log p ( y | x ) , which pushes the model away from generating y .",
"This pull-and-push effect is analogous to the L + contrast and L contrast terms in the loss Eq.",
"1 in CONSEQ .",
"Note that the gradient updates of REINFORCE are entirely based on the sampled sequences.",
"In contrast, CONSEQ takes advantage of the ground truth targets in addition to the sampled ones, which help avoid the instability of REINFORCE.",
"Indeed, we implemented the REINFORCE algorithm with the BART-large model fine-tuned under MLE objective as initialization; we found that after a few hundred updates the summaries sampled from the model become unintelligible and our reward function fails to compute the scores (no meaningful q-a pairs can be generated based on the summaries).",
"We use QUALS to select high quality positive and negative examples for CONSEQ with the goal of",
"training seq2seq summarization models that are more factual.",
"In order to create S + and S we first evaluate QUALS for all the ground truth summaries of the training set and select p % of those with the highest QUALS scores to form S + .",
"3 To generate the negative samples, we use the topK sampling ( k = 50 ) during decoding to generate 6 summaries for each input document in the training set; we then select the one with the lowest QUALS score out of the 6 summaries for each input document; next, we rank the selected summaries and choose p % of those with the lowest QUALS scores to form S .",
"Note that the summaries in S + and S may correspond to different input documents.",
"The last step is to take the intersection of the examples between S + and S to form S + and S , respectively.",
"For example, we select a summary s from S + to be included in S + if and only if there exists a summary s (cid:48) in S such that s and s (cid:48) correspond to the same input document.",
"As a result of the above process, the contrastive loss in Eq.",
"1 can thus push the model from the inconsistent summary towards the consistent one for the same input document.",
"Next, we describe two variants of the CONSEQ algorithm.",
"Weighted loss: We can weight the losses in Eq.",
"1 using QUALS scores and minimize the following loss, assuming normalization of 0 r 1 : L contrast = E x,s S + r ( s, x ) log p ( s | x ) E x,s S (1 r ( s, x )) log (1 p ( s | x )) , where r ( s, x ) is the QUALS score for summary s and input document x .",
"Online learning: We refer to Algorithm 1 as the offline training setting in the sense that in each iteration, S + and S are constructed by pooling together all available input documents and their candidate summaries to train the model.",
"It is also possible to perform training in an online fashion.",
"Specifically, we can take in a batch of input sequences in each iteration, construct S + and S based only on the examples in the batch, and take a gradient step with respect to Eq.",
"1. Compared to the offline setting, the model parameters are updated much more frequently and 3 We found it necessary to select the top p % of the ground truth summaries to form S + because not all ground truth summaries are factually consistent to the input documents, due to the imperfect data collection process.",
"This is especially true for the XSUM dataset as we discuss in the next section.",
"the candidate sequences are always generated from the latest model parameters.",
"On the other hand, the construction of S + and S are restricted to the examples within the batch, resulting in potentially less representative samples compared to the offline setting.",
"Datasets: We perform our summarization experiments on two widely used news datasets: XSUM (Narayan et al., 2018) and CNN/DailyMail (Nallapati et al., 2016).",
"The XSUM dataset consists of short, one-sentence summaries of the BBC news articles.",
"The dataset is constructed by taking the first sentence of an article as the summary and the rest of the article as input document.",
"As a result, the summaries are highly abstractive.",
"At the same time, there are many examples where a summary contains information (e.g. the first name of a person) that is not mentioned in the input document.",
"This introduces an undesirable bias in the training data to encourage the model to hallucinate.",
"The CNNDM dataset contains multi-sentence (4 sentences on average) summaries of news articles from the CNN and DailyMail.",
"The summaries are curated by human annotators in terms of highlights of the article.",
"Compared to XSUM, the summaries in CNNDM are much more extractive each summary sentence usually corresponds to an existing sentence in the input document.",
"Evaluation metrics: We use the ROUGE (Lin, 2004) to measure general summarizaiton quality.",
"For factual consistency, we use the QAGS protocol (see Appendix for more details) as well as the FactCC model (Kryscinski et al., 2019) downloaded directly from the official website.",
"4 In contrast to QAGS, FactCC is a BERT-based classification model that makes a binary prediction if the given claim sentence is factually consistent or not with the given input document.",
"Implementation details: We use the Fairseq (Ott et al., 2019) implementation of BART-large (Lewis et al., 2019) for the summarization model as it is shown to achieve the state-of-the-art ROUGE scores for this task.",
"We fine-tune the BART-large model with the standard learning rate of 3 10 5 4 https://github.com/salesforce/factCC Figure 3: Correlation between QUALS and QAGS on XSUM (left) and CNNDM (right).",
"on XSUM and CNNDM respectively to establish the MLE baselines.",
"We then initialize CONSEQ with the MLE baseline models.",
"In CONSEQ we use a learning rate of 3 10 6 .",
"For evaluation, we generate summaries using beam search with beam sizes of 4 and 6 for CNNDM and XSUM, respectively.",
"The generated summaries are limited to 55-140 and 10-60 tokens in lengths for CNNDM and XSUM, respectively.",
"Our QAGen model in QUALS is also a BART-large model fine-tuned on the SQUAD (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2017) datasets.",
"To construct the S + and S , we found that selecting the p = 30 % and 50 % leads to the best result on the validation set of XSUM and CNNDM, respectively, among the choices of p = 25 , 30 , 50 , 75 , 90 .",
"We first verify that our proposed QUALS metric correlates well with QAGS.",
"We evaluate both QUALS and QAGS on the same set of summaries generated by the MLE baseline model on the test set of documents in XSUM and CNNDM, respectively.",
"The examples are grouped into bins based on the percentiles of the QUALS scores.",
"We then plot the average QAGS score of the examples within each bin.",
"As shown in Figure 3 (a more fine-grained plot is shown in Figure 4 of the Appendix), QUALS correlates very well with QAGS in both datasets.",
"Since our method only relies on ranking QUALS scores in contrastive learning, monotonicity of QUALS with respect to QAGS is sufficient.",
"We compare our proposed method QUALSCONSEQ to the state-of-the-art abstractive summarization model (BART-large MLE).",
"In an ablation study, we check the effect of changing the QUALS metric as well as the effect of changing the CONSEQ algorithm.",
"We summarize the results in Table 2 and Table 3.",
"We observe that our proposed method QUALS-CONSEQ ( Q-C ) achieves more than 4 points improvement in QAGS over the MLE baseline in XSUM and about 2 points improvement in CNNDM, where we also achieve a slightly better ROUGE over MLE.",
"Improving ROUGE is not the goal of our paper; what we show is that we can significantly improve factual consistency of summaries without degrading ROUGE, as is common practice (Kedzie and McKeown, 2019).",
"Next, we describe the various ablation settings.",
"1) In R-C (ROUGE-CONSEQ ), we simply use the sum of ROUGE-1,2,L scores to evaluate the generated summaries against the ground truth summaries as the metric in constructing S + and S .",
"In both Table 2 and 3 it results in poorer QAGS than the MLE baseline.",
"This confirms the necessity of having an effective metric for factual consistency.",
"Note that R-C even results in poorer ROUGE scores.",
"We believe this is caused by the fact that ROUGE is already highly optimized by the MLE model and it is used as initialization for R-C ; the hard examples where the MLE model couldn't produce good ROUGE scores may be inherently problematic (e.g. hallucination in the ground truth summary); focusing on these examples by R-C can therefore make the model weaker on other examples.",
"2) In Q-F1-C (QUALS -F1-C ONSEQ ), we make a modification to QUALS .",
"Instead of measuring the factual consistency in terms of the log likelihood scores, we measure the F1 between generated answers from the summary and the input document in the QAGen model.",
"In particular, given a summary as input, the QAGen model generates a q-a pair q, a .",
"We then use the corresponding document as input to the QAGen model and force the decoder to generate the question tokens q and allow the QAGen to generate the answer tokens a (cid:48) .",
"We then compute the F1 overlap score between a and a (cid:48) .",
"This would be closer to the QAGS setting where explicit answers are generated and compared.",
"We observe that in Table 3, Q-F1-C achieves a slightly higher QAGS than Q-C .",
"But overall Q-F1-C performs worse than Q-C .",
"We believe this is due to the fact the log likelihood scores are softer than F1 and can potentially account for answers that are semantically similar.",
"3) In Q-C-W (QUALS-CONSEQ -Weighted), we use the weighted version of CONSEQ as described in Sec. 3.",
"Since the QUALS score is a difference between log likelihood scores, it can have negative values.",
"We evaluate the QUALS on the training examples to obtain an interval of its values and linearly normalize the QUALS as weights in the loss function.",
"We observe that it improves the factual consistency over the MLE baseline but not as much as Q-C .",
"4) In Q-C-O (QUALS-CONSEQ -Online), we use the online version of CONSEQ as described in Sec. 3.",
"We sample about 6 examples in a mini-batch and select 2 of them for S + and S per GPU with a total of 40 GPUs.",
"We observe that it tends to achieve higher ROUGE scores but lower factual consistency scores compared to Q-C .",
"5) In Q-P (QUALS -Positive), we only use the positive summaries ( S + ) and the positive loss L + contrast in Eq.",
"1 for training.",
"We observe that it achieves lower factual consistency scores compared to Q-C and this shows that the negative loss in CONSEQ is useful to boost factual consistency.",
"FactCC results: As shown in Table 3 for CNNDM, Q-C achieves over 4 points improvements in FactCC score over the MLE baseline.",
"However, in Table 2 for XSUM, Q-C has about 1 point lower FactCC score than the MLE baseline.",
"We investigated this issue and found that the ground truth summaries of the XSUM test set have a FactCC score of just 21.0, which means that only 21% of the ground truth summaries in XSUM are judged as factual according to FactCC.",
"This suggests that the FactCC model is not well suited for making predictions on highly abstractive summaries.",
"This is not surprising as the authors of FactCC mentioned in Sec. 3.1 (Kryscinski et al., 2019) that FactCC is built on the premise that ..the level of abstraction of generated summaries is low and models mostly paraphrase single sentences and short spans from the source.",
"Unfortunately for XSUM, this premise does not hold.",
"Comparison with Other Methods: There are 2 other methods in the literature (Cao et al., 2018; Zhu et al., 2020) for improving factual consistency of summarization models.",
"Both rely on information extraction (OpenIE) to extract relations and incorporate the relation representations into seq2seq models.",
"The authors in (Zhu et al., 2020) proposed a Fact-Aware Summarizer (FASum) and a Fact Corrector model.",
"In Table 4 of their paper, the FASum achieves significantly lower ROUGE scores (30.28/10.03/23.76 and 40.53/17.84/37.4 for ROUGE-1/2/L on XSUM and CNNDM respectively).",
"This indicates a significant gap in the summary quality.",
"Even their best result, which is using Fact Corrector on UniLM (Dong et al., 2019), achieves lower ROUGE scores than BART-large MLE.",
"Although the authors in (Zhu et al., 2020) used FactCC as an evaluation metric, they did not use the official method to train FactCC; they used the ground truth summaries rather than sampled sentences from the input documents as positive examples.",
"As a result, we are not able to compare the FactCC numbers reported in (Zhu et al., 2020).",
"Nevertheless, we can observe that there is little or no improvements for Fact Corrector on UniLM according to FactCC.",
"We believe that this is because the recent large transformer-based, pre-trained seq2seq models such as UniLM and BART have significantly improved the summarization quality and it is much more challenging to improve even the factual consistency of these state-of-the-art models.",
"In comparison, our results reported in Table 2 and Table 3 represent significant improvements.",
"The authors in (Cao et al., 2018) only experimented on Metrics Factual Informative Grammatical better worse equal better worse equal better worse equal XSUM 18 9 73 22 9 69 4 2 94 CNNDM 18 7 75 42 22 36 5 6 89 Table 4: Human evaluation results on summaries generated by QUALS-CONSEQ in comparison to the BART-large MLE baseline for 100 randomly selected examples from the test sets of XSUM and CNNDM.",
"the Gigaword corpus (Rush et al., 2015) and did not release their code so we were unable to compare to their method.",
"However, given the recent progress in transformer-based seq2seq models, it is likely that our BART-large MLE baseline outperforms their RNN-based models.",
"Again, we believe that it is much easier to improve factual consistency of a weak seq2seq model than that of a strong model (such as UniLM or BART-large) as shown in (Zhu et al., 2020).",
"Human evaluation: We use Amazon SageMaker Ground Truth 5 to conduct human evaluation.",
"We sample 100 examples from the test set of XSUM and CNNDM, respectively.",
"In each task, we present an input document, together with the generated summaries from the BART-large MLE and QUALS-CONSEQ models.",
"We ask the annotators to select which of the two summaries they prefer along 3 dimensions: factual consistency, informativeness and grammatical correctness.",
"For each of these dimensions they can also choose Equal if they feel that both summaries are of similar quality.",
"Our annotators consist of 10 data associates who are native English speakers whose background includes training in linguistic annotation.",
"Each task is performed by 3 different annotators and we take the majority vote.",
"We provide the detailed setup and instructions in the Appendix.",
"The result of human evaluation is reported in Table 4, showing the percentage of examples along these three dimensions.",
"In both datasets, we observe that QUALS-CONSEQ clearly improves the factual consistency of the generated summaries compared to the BART-large MLE baseline.",
"We notice that the improvement in informativeness is even greater.",
"Fleiss's Kappa (Fleiss et al., 1971) shows fair agreement for factual consistency, informativeness and grammatical correctness 5 https://aws.amazon.com/sagemaker/ groundtruth/ Input 1: Keates made over 150 league appearances for Wrexham and captained the club to an FA Trophy win in 2013.",
"choices (0.136/0.270/0.043 for XSUM and 0.237/0.202/0.206 for CNNDM).",
"We note, however, that most disagreements occur when one annotator rates two summaries as equal and another rates one of the two as either better or worse.",
"To measure this, we computed Fleiss's Kappa again, counting equal and either better or worse as equivalent (and better and worse as not equivalent).",
"Here, our agreement is almost perfect (0.837/0.839/0.975 for XSUM and 0.945/0.816/0.967 for CNNDM).",
"We thus see that annotators rarely directly contradict each other on rating one summary above or below another, but often have a hard time deciding when the two summaries are equal.",
"We analyzed the human evaluation results and found several types of improvements/errors produced by QUALS-CONSEQ .",
"Our model is able to rectify factual errors found in MLE such as 1) entity hallucination and errors (Example 1 and 2 in Table 5) and 2) relations and co-reference (see Table 1 and Example 3 in Table 5).",
"QUALS-CONSEQ also made mistakes in cases where it was not sensitive to certain modifier phrases (the extra \"more than\" in Example 2 in Table 5).",
"More examples of generated summaries and q-a pairs are in the Appendix.",
"Illustration of QUALS : We take an example to illustrate how QUALS captures the factual inconsistency of summaries.",
"The BART-large MLE model generates the summary: The AirAsia flight 4U 9525 crash was the latest in a series of tragedies that have hit the aviation industry.",
"The input document described the AirAsia crash but did not mention the flight number.",
"In fact, 4U 9525 is the Germanwings flight that crashed in the French Alps.",
"The model hallucinated the flight number because it appeared in several training examples that cover the Germanwings crash.",
"Given the above summary, our QAGen model generates the following q-a pairs: Q1: What was the name of the flight that crashed?",
"A1: 4U 9525.",
"Q2: Which airlines flight crashed?",
"A2: AirAsia .",
"In Figure 5 in the Appendix we show the negative log likelihood per subword token on these q-a pairs conditioned on the summary (blue) and input document (orange).",
"The answer to the first question is very likely according to the summary while extremely unlikely according to the input document, indicating factual inconsistency.",
"On the other hand, AirAsia is factually consistent and the second q-a pair is likely according to the input document.",
"The QUALS scores for the two q-a pairs are -2.615 and 0.054, respectively.",
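To make the scoring concrete, here is a minimal sketch of the per-pair computation as we read it from the description above; the function names are ours, and the per-token log-probabilities are assumed to come from the QAGen model:

```python
# A sketch of the QUALS idea (our reading, not the authors' code): score a
# q-a pair by the difference between its average per-token log-likelihood
# conditioned on the input document and conditioned on the summary.
def avg_log_likelihood(token_logprobs):
    return sum(token_logprobs) / len(token_logprobs)

def quals_score(logprobs_given_doc, logprobs_given_summary):
    # Consistent q-a pairs are about as likely under the document as under
    # the summary, so the score stays near or above zero; hallucinated
    # content is unlikely under the document and drives the score negative.
    return avg_log_likelihood(logprobs_given_doc) - avg_log_likelihood(logprobs_given_summary)
```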
"Several authors have pointed out the problem of factual inconsistency in abstractive summarization models (Kryscinski et al., 2019; Cao et al., 2018; Durmus et al., 2020).",
"Besides QAGS (Wang et al., 2020) and FactCC (Kryscinski et al., 2019), another possible approach to quantify factual consistency is to rely on Open Information Extraction (OpenIE) and dependency parsing tools to identify and match the relations in an input document and its summary (Cao et al., 2018; Zhu et al., 2020).",
"However, the underlying OpenIE tools are often not accurate enough to be used for this purpose.",
"Our proposed CONSEQ algorithm is related to the unlikelihood training (Welleck et al., 2019; Li et al., 2019) as both have positive and negative loss terms.",
"The key difference is that in unlikelihood training, the negative loss serves as a regularization term, weighted by a hyperparameter, in addition to the regular MLE training.",
"In contrast, our CONSEQ is motivated from the REINFORCE algorithm and treats the positive and negative terms equally.",
"Furthermore, while the unlikelihood training uses all the ground truth sequences equally in the regular MLE (positive) loss term, we construct the positive and negative sets by incorporating the reward function (e.g. QUALS ) as discussed in Sec. 3.",
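The following sketch shows one way to realize such an equally weighted positive/negative objective; it reflects our reading of the description above (REINFORCE with a +/-1 reward derived from a QUALS threshold), not the authors' exact implementation:

```python
import torch

def conseq_loss(seq_logprobs, rewards):
    """Schematic CONSEQ-style objective: sampled summaries with high QUALS
    form the positive set (reward +1.0) and low-QUALS samples the negative
    set (reward -1.0). As in REINFORCE with a +/-1 reward, the positive and
    negative terms are weighted equally.

    seq_logprobs: tensor of sequence log-likelihoods log p(y|x), one per sample
    rewards:      tensor of +1.0 / -1.0 labels from a QUALS threshold
    """
    return -(rewards * seq_logprobs).mean()
```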
"In another related work, factual consistency metrics at the entity level have been proposed (Nan et al., 2021).",
"The authors also investigated several techniques such as data cleaning, multitask learning and entity-augmented decoding to improve entity level factual consistency scores of abstractive summarization models.",
"In contrast, the QUALS metric that we propose is more general, not limited to entities.",
"Another recent work tackles the hallucination problem in abstractive text summarization via post processing on the generated summary (Chen et al., 2021).",
"Specifically, entities of the generated summaries are swapped with other named entities of the same type found in the original document to form a set of candidate summaries.",
"The final summary is determined by a ranking model trained to prefer the factually consistent summaries.",
"In this paper we proposed to improve the factual consistency of abstractive summarization models.",
"We first proposed an efficient evaluation protocol called QUALS to measure factual consistency.",
"We then proposed a contrastive learning algorithm for seq2seq models called CONSEQ to maximize QUALS during training.",
"We demonstrated that our proposed method significantly improves the factual consistency of the current state-of-the-art summarization model measured by automatic metrics as well as side-by-side human evaluation.",
"In addition to improving factual consistency of summarization models, we believe that the CONSEQ algorithm can have a wider impact on training seq2seq models in general to incorporate non-differentiable evaluation metrics into model training."
] | [
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"other",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"objective",
"result"
] |
[
"Lexically-constrained sequence decoding allows for explicit positive or negative phrase-based constraints to be placed on target output strings in generation tasks such as machine translation or monolingual text rewriting.",
"We describe vectorized dynamic beam allocation, which extends work in lexically-constrained decoding to work with batching, leading to a five-fold improvement in throughput when working with positive constraints.",
"Faster decoding enables faster exploration of constraint strategies: we illustrate this via data augmentation experiments with a monolingual rewriter applied to the tasks of natural language inference, question answering and machine translation, showing improvements in all three.",
"For many natural language generation tasks, we often know word(s) that should (or should not) be in the output sentence.",
"Examples include terminology databases in Machine Translation (MT) (Hokamp and Liu, 2017), names (and generic responses) in dialogue generation (Li et al., 2016; Gu et al., 2016), objects in image captioning (Anderson et al., 2017), and facts in abstractive summarization (See et al., 2017).",
"One approach to enforce hard lexical constraints in the output is to modify the inference procedure to enforce their presence directly (Hokamp and Liu, 2017).",
"These constraints could be either positive (a word must appear in the output) or negative (a word must be avoided).",
"While negative constraints could be easily enforced by preventing hypotheses with prohibited tokens from entering the beam, placing positive constraints in natural and meaningful ways is less straightforward.",
"We improve upon previous work by vectorizing the dynamic beam allocation (DBA) algorithm from Post and Vilar (2018) and by incorporating multi-state tries, which track a subset of nodes at each decoding timestep.",
"These improvements lead to a five-fold speedup in decoding with positive constraints and in some cases better constraint placements (with respect to BLEU).",
"Post and Vilar (2018) motivated the utility of lexically-constrained decoding in MT for scenarios such as interactive translation and domain adaptation.",
"Translation applications handling large amounts of data will clearly benefit from improvements in speed: the same is true for large-scale data augmentation via rewriting.",
"In this case, a practitioner will ideally explore various task-specific rewriting strategies that may lead to improvements as observed during development, and then incorporate the best strategy into a test-final model.",
"Recently, sentential paraphrasing gained the ability to enforce lexical constraints (Hu et al., 2019), but constrained decoding was still too inefficient to be practical (Hokamp and Liu, 2017) at a large scale.",
"Even with the approach described by Post and Vilar, exploring the space of possible rewriting strategies on a task-specific basis may be overly time consuming: our performance improvements to their algorithm lower the barrier of entry, so that one may more practically experiment with various strategies during development.",
"To illustrate our point, we build an improved monolingual sentential rewriter that can be conditioned on arbitrary positive and negative lexical constraints and use this to augment data for three external NLP tasks with different strategies: Natural Language Inference (NLI), Question Answering (QA) and MT.",
"Our main contributions are: (1) a more efficient and robust approach to lexically-constrained decoding with vectorized DBA and trie representations; (2) a trained and freely available lexically-constrained monolingual rewriter (http://nlp.jhu.edu/parabank) with improvements in both human-judged semantic similarity and fluency over the initial PARABANK rewriter (Hu et al., 2019); and (3) monolingual rewriting constraint heuristics for automatic data augmentation leading to improvements on NLI / QA / MT.",
"Background (Constrained decoding): Prior work explored methods to apply lexical constraints to a Neural Machine Translation (NMT) decoder (Hokamp and Liu, 2017; Anderson et al., 2017).",
"However, most of these methods are slow and impractical as they change beam sizes at different time steps, which breaks the optimized computation graph.",
"Post and Vilar (2018) proposed a means of dynamically allocating the slots in a fixed-size beam to ensure that even progress was made in meeting an arbitrary number of constraints provided with the input sentence.",
"However, despite batching being their motivation, their approach did not scale to it; instead, they sequentially processed constraints for the sentences within each batch.",
"Paraphrases and Rewriting Many works sought to create paraphrases or paraphrastic expressions through existing corpora.",
"For example, DIRT (Lin and Pantel, 2001) extracts paraphrastic expressions from paths in dependency trees.",
"Weisman et al. (2012) explored learning inference relations between verbs in broader scopes (document or corpus level).",
"PPDB (Ganitkevitch et al., 2013) constructs paraphrase pairs by linking words or phrases that share the same translation in another language.",
"PARANMT (Wieting and Gimpel, 2018) and PARABANK (Hu et al., 2019) used back-translation to build a large paraphrase collection from bilingual corpora.",
"For arbitrary sentence rewriting, Napoles et al. (2016) used statistical machine translation in tandem with PPDB as a black box monolingual sentential rewriter.",
"Mallinson et al. (2017) used a series of NMT model pairs to perform back-translations for monolingual paraphrasing.",
"A similar approach was adopted by PARANMT to create a large paraphrase collection, which is used to train a monolingual sentence rewriter for canonicalization.",
"PARABANK (Hu et al., 2019) extends PARANMT's approach and produced an NMT-based rewriter with the ability to apply lexical constraints to produce multiple paraphrases.",
"However, Hu et al. (2019) did not: evaluate the rewriter's performance on in-the-wild sentences; explore more sophisticated versions of the rewriter; nor demonstrate its utility on NLP tasks.",
"Data augmentation Data augmentation has been used to improve performance and robustness in deep neural models.",
"In NMT, the most common approach is back-translation, where monolingual text in the target language is translated to create synthetic source sentences (Sennrich et al., 2016).",
"Variants of back-translation target specific words with high prediction loss (Fadaee and Monz, 2018), employ sampling to increase diversity (Edunov et al., 2018), replace rare words (Fadaee et al., 2017), or replace words at random (Wang et al., 2018).",
"Automatic data generation has also been successfully used for community question answering (Chen and Bunescu, 2017), semantic parsing (Jia and Liang, 2016), and task-oriented dialogue (Hou et al., 2018) by generating new data from the training dataset.",
"In contrast, our model is trained on a much larger external corpus and is fixed, independent of the task.",
"Kobayashi (2018) utilized a pre-trained language model for automated data augmentation, though they only consider word-level rewrites and encourage label-preservation, while we paraphrase whole sentences with lexical constraints, independent of a gold label.",
"Most similar to our experiments, Iyyer et al. (2018) explored syntactic paraphrasing for augmentation in sentiment and NLI tasks, extending prior work on PARANMT.",
"Lexically-constrained decoding is a modification to beam search that yields decoder outputs honoring user-supplied constraints.",
"These constraints can be provided in the form of: positive constraints , which specify that certain tokens or token sequences must be present in the output; or negative constraints , which specify token or token sequences that must not be generated.",
"Take positive constraints for example: in translating the sentence Das stimmt einfach nicht to English, the user can specify the constraint not the case to (presumably) get the output That's just not the case instead of the model-preferred output That's just not true.",
"Figure 1: Example constraint tries, with constraints c1 (a horse) and c2 (a cow) in panel (a), and c3 (a small bird) and c4 (small cat) in panel (b), along with per-node met/unmet markers.",
"While there is no guarantee that the decoder will use the constraints in a sensible way, constraints are often well-placed empirically.",
"The implementation of positively constrained decoding comprises two key pieces: tracking which of the supplied constraints each hypothesis has already generated, and ensuring progress through the constraints by dynamically allocating the beam to hypotheses that have generated different numbers of them.",
"We describe an improvement to each of these over Post and Vilar (2018): (1) a vectorized approach to beam allocation that works with batch decoding; and (2) the use of tries for recording constraint state, thereby handling certain corner cases.",
"These contributions allow the decoder to find much better placement of constraints (as evidenced by an almost 2 point BLEU score increase) and to increase throughput for batch decoding.",
"Here, we assume the reader is familiar with beam decoding for NMT, the details of which are provided by Post and Vilar (2018).",
"The implementation by Post and Vilar used a flat one-dimensional array listing the word indexes of all positive constraints (duplicates allowed).",
"A parallel array was used to mark which words in this list were non-final words in a sequence, so that progress could be tracked through sequences of tokens.",
"Progress through the constraints was tracked by maintaining, for each slot in the beam, a third array, which marked which of the constraints had already been generated by that hypothesis.",
"This array-based representation fails in two corner cases; the first occurs when two constraints have an identical prefix.",
"Consider the constraints in Fig. 1(a) when translating the French sentence une vache et un cheval.",
"The array-based implementation has to choose which constraint to generate when it has only generated the first word of the English translation, a cow and a horse .",
"Suppose it chooses constraint c 1 , a horse .",
"If the subsequent step generates cow instead, the constraint tracking for the phrase a horse will be marked as incomplete and reset, and the decoder will not realize that it has satisfied a different constraint.",
"A second corner case arises when a constraint c 4 is a non-prefix substring of a constraint c 3 .",
"In this situation, the decoder may begin generating the longer constraint, only to generate the shorter one, without realizing it.",
"For example, consider a target sentence that should be a small cat saw a small bird , with constraints a small bird and small cat (Figure 1b).",
"When generating the first word, a , the decoder begins tracking c 3 .",
"It continues by adding to this hypothesis the second word, small .",
"However, suppose it then extends this hypothesis with cat .",
"It will abort tracking of c 3 , and not realize that it completed c 4 .",
"A more natural representation that addresses these corner cases is to organize constraints that haven't yet been generated into a trie.",
"Nodes in the trie that represent the ends of constraints are augmented with a counter that indicates how many times that constraint must be generated.",
"2 Each time a constraint is completed, the number is decremented, and nodes of the trie can be trimmed when they lead only to paths ending in zero counts.",
"(Because one constraint can be a subsequence of another, some interior nodes will also have these counts.)",
"To match constraints that overlap in these ways, we track multiple states in each constraint trie.",
"In summary, we represent all the constraints as a compact trie.",
"Each hypothesis in the decoder beam has its version of the trie.",
"The set of active states in each hypothesis' trie tracks all suffixes of the target words that match against the constraint trie.",
"When a constraint is generated, its counter is decremented and zero paths are pruned.",
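A minimal sketch of such a multi-state constraint trie is given below; this is our illustration rather than the SOCKEYE implementation, and it omits the counter-decrementing and pruning bookkeeping:

```python
class TrieNode:
    def __init__(self):
        self.children = {}  # token id -> TrieNode
        self.count = 0      # constraints ending here; interior nodes may be > 0

class ConstraintTrie:
    def __init__(self, constraints):
        self.root = TrieNode()
        for phrase in constraints:  # each phrase is a list of token ids
            node = self.root
            for tok in phrase:
                node = node.children.setdefault(tok, TrieNode())
            node.count += 1

    def step(self, states, tok):
        """Advance every active state with the newly generated token.
        Returns the new state set and how many constraints were completed.
        (The real implementation also decrements counters and prunes
        zero-count paths; that bookkeeping is omitted here.)"""
        completed = 0
        new_states = set()
        for node in states | {self.root}:  # the root lets a new match start anywhere
            child = node.children.get(tok)
            if child is not None:
                completed += child.count
                new_states.add(child)
        return new_states, completed
```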
"Negative constraints are used to denote words and phrases that the decoder must not generate.",
"Blocking single-word negative constraints can be done by setting their costs to infinity before doing top-k selection at each time step.",
"These negative constraints are also represented in a trie, although it is slightly different, because it does not have a counter and never needs to be pruned.",
"Instead, it records at each node the list of word IDs that end phrases for which the current word is the penultimate.",
"We similarly track all suffixes of the current hypothesis' target word string that match the negative constraint trie.",
"At each time-step, we block the generation of active phrases by setting to infinity all word IDs marked in the current node (if any).",
"This includes the root node, which handles single-word constraints.",
"Each state is then extended by following outgoing arcs, if present, or else resetting them to the root state.",
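A sketch of the blocking step, assuming the set of banned word IDs has already been collected from the active trie nodes as described:

```python
import torch

# Before top-k selection at each time step, set to -inf the scores of all
# word IDs that would complete a banned phrase, as recorded at the current
# nodes of the negative-constraint trie (including the root, which covers
# single-word constraints).
def block_negative_constraints(logits: torch.Tensor, banned_ids) -> torch.Tensor:
    if banned_ids:
        logits[..., list(banned_ids)] = float("-inf")
    return logits
```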
"Post and Vilar (2018) describe an algorithm that divides the beam among hypotheses that have generated different numbers of positive constraints.",
"For a beam of size k and with C positive constraint tokens, the algorithm produces a set of candidate extensions of the k hypotheses from the beam.",
"They assemble these extensions from three sources: (a) the top-k best-scoring tokens across all hypotheses in the beam (without respect to constraints); (b) the set of tokens that advance the constraint trie for each hypothesis; and (c) the best-scoring extension of each hypothesis.",
"After constructing this candidate list, they whittle it down to a list of size k and use it to construct the beam at the next time step.",
"This way, the algorithm ensures that portions of the beam are devoted to candidates having met different numbers of constraints, and thereby that progress is made towards meeting all the constraints as decoding proceeds.",
"However, their implementation used a procedural approach which is incompatible with batching; that is, constraints for input segments within a batch are processed sequentially, so increasing the batch size does not produce any speed gains.",
"We replace the procedural approach with a vectorized one, which uses GPU operations to quickly assemble the list of candidates and allocate them to the beam such that we do benefit from batching.",
"A sketch of our algorithm follows.",
"We assemble candidates from the same three sources described above.",
"Sets (a) and (c) already use fast GPU operations, which can be applied efficiently even batch-wise.",
"Set (b) is less amenable to vectorization, but can be assembled by querying each hypothesis for its unmet constraints.",
"We now use a sorting-based algorithm to parallelize the divvying of the beam among hypotheses having met different numbers of constraints.",
"(The difference between (a) and (c) is that the items constituting (a) typically come from different extensions of the top hypotheses, whereas (c) ensures that one extension of each hypothesis is in the candidates list.)",
"We do this by assembling a matrix with all the candidates for all sentences in the batch ( Fig. 3).",
"This matrix contains a column for each candidate, including the sentence number, the number of unmet constraints for that hypothesis, its sequence score, the hypothesis it extends, and the next vocabulary ID.",
"With this matrix, we can quickly select the k hypothesis extensions for the next timestep using a multi-key sort.",
"The first key is the sentence number.",
"Next, it is the number of unmet constraints in each hypothesis.",
"We then make use of a \"step\" row, which assigns increasing indices within each group of hypotheses with the same number of unmet constraints.",
"Sorting on this row as the third key establishes a round-robin assignment of the k -sized beam to items having met different numbers of constraints.",
"In the end, we select the top k items (in the example, k = 7 and the selected columns are in gray).",
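The following NumPy sketch mirrors this multi-key sort; the real implementation runs as GPU operations inside the decoder, and the exact key ordering here is our reading of the description rather than the released code:

```python
import numpy as np

# Each candidate column carries its sentence id, its number of unmet
# constraints, and its score.
def allocate_beam(sent, unmet, score, k):
    # Rank candidates within each (sentence, unmet-count) group by score.
    by_group = np.lexsort((-score, unmet, sent))
    g_sent, g_unmet = sent[by_group], unmet[by_group]
    # "Step" row: increasing index within each (sentence, unmet) group.
    step = np.zeros(len(sent), dtype=np.int64)
    for i in range(1, len(sent)):  # a parallel scan on the GPU in practice
        same = g_sent[i] == g_sent[i - 1] and g_unmet[i] == g_unmet[i - 1]
        step[i] = step[i - 1] + 1 if same else 0
    # Sorting by (sentence, step) yields a round-robin pass over the
    # unmet-constraint groups; keep the first k candidates per sentence.
    final = by_group[np.lexsort((step, g_sent))]
    selected = []
    for s in np.unique(sent):
        selected.extend(final[sent[final] == s][:k])
    return np.array(selected)  # indices of the chosen candidates
```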
"We use SOCKEYE (Hieber et al., 2017; https://github.com/awslabs/sockeye/) for our evaluations.",
"We trained a 6-layer German-English Transformer using the default settings on the WMT'18 training data and the newstest2018 test set for evaluation (Bojar et al., 2018).",
"Following Post and Vilar (2018), we compare decoding results in an unconstrained setting and with two sets of positive constraints: rand3, which selects 3 random words from the reference, and phr4, which selects a single 4-word phrase.",
"We report decoding speed (in sentences per second) and BLEU score (Papineni et al., 2002), as measured by SacreBLEU (Post, 2018).",
"The results are shown in Table 1.",
"Our approach is faster than existing approaches when decoding with positive constraints and produces the same or higher BLEU scores, which we take as a sign of more fluent and natural hypotheses under constraints.",
"Without batching, there is no speedup, but at a batch size of 20, we see roughly a 5x speedup.",
"Inspired by the approach described in PARABANK (Hu et al., 2019), we trained a more powerful English monolingual rewriter by using a multi-head self-attention NMT model, Transformer (Vaswani et al., 2017).",
"We used a 6-layer encoder and decoder with a model size of 512 and 8 attention heads.",
"The encoder and decoder embeddings share the same weight.",
"Unlike PARABANK, which trained a rewriter on a subset of 50M paraphrase pairs out of its collection, we trained on all of the paraphrastic pairs in PARABANK originating from CzEng that: (1) have a regression score over 0.50; (2) only consist of ASCII characters after punctuation normalization; and (3) have a reference/paraphrase token Jaccard index between 0.25 and 0.65.",
"We retain 141,381,887 paraphrastic pairs, out of over 220 million, as training data after applying these filters.",
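For concreteness, the three filters could be expressed roughly as below; the thresholds come from the text, while tokenization, punctuation normalization, and the regression scorer are assumed to exist upstream:

```python
import string

def keep_pair(reference: str, paraphrase: str, regression_score: float) -> bool:
    if regression_score <= 0.50:
        return False
    # ASCII-only check (the text applies this after punctuation normalization)
    if any(ch not in string.printable for ch in reference + paraphrase):
        return False
    ref_toks, par_toks = set(reference.split()), set(paraphrase.split())
    union = ref_toks | par_toks
    if not union:
        return False
    jaccard = len(ref_toks & par_toks) / len(union)
    return 0.25 <= jaccard <= 0.65
```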
"To ensure output quality, we only use back-translated paraphrases as source.",
"PARABANK is a real-cased resource.",
"We mark all words that have first-character capitalization and convert them to lowercase.",
"The marking is retained as a casing source factor.",
"(PARABANK generated paraphrases from two large bilingual corpora, CzEng (Bojar et al., 2016a) and GigaFrEn (Callison-Burch et al., 2009); we picked paraphrases from only CzEng, the larger of the two.)",
"We learn a shared byte-pair encoding (BPE) over the entire training data with 30,000 BPE operations (Sennrich et al., 2016), keeping all vocabulary items with a frequency over 50 in the post-BPE data.",
"We follow Sennrich and Haddow (2016) and use BIOE tagging to annotate BPE segmentation, broadcasting the casing factor accordingly.",
"The encoder uses both source factors.",
"The model is trained on 2 NVIDIA GTX 1080Ti's until convergence (5 days).",
"Rewriter Evaluation: We randomly sampled 100 instances from both the MNLI matched and mismatched development sets.",
"Each instance consists of 4 sentences: premise, entailed, contradicting, and neutral.",
"We use the following 3 different rewriters to rewrite all 800 sentences: (1) an LSTM-based rewriter trained on PARABANK alpha, following (Hu et al., 2019); (2) a Transformer-based rewriter trained on PARABANK alpha; and (3) a Transformer-based rewriter trained on full PARABANK with the filters and improvements described here.",
"Inspired by the interface of EASL (Sakaguchi and Van Durme, 2018), we ask crowd-workers to give each paraphrase a score between 0 and 100 depending on its semantic similarity to the original, reference sentence.",
"Independently, we provide options for flagging ungrammatical or nonsensical sentences.",
"Paraphrases are judged by up to 3 different workers, with 11 workers participating.",
"We randomly include an attention check consisting of the reference sentence itself; only workers who pass the check at least 90% of the time and contribute at least 9 judgments are included in the result.",
"The result is shown in Table 2.",
"Table 2: Comparison between three monolingual rewriting systems (Similarity / STD / Fluency): LSTM alpha 74.5 / 25.0 / 80.7%; Transf. alpha 78.3 / 22.9 / 87.2%; Transf. Full 81.7 / 20.9 / 90.9%. Systems marked alpha are trained on PARABANK alpha, while the other is trained on the full data. Similarity is the mean human-judged semantic similarity score (higher is better); STD is the standard deviation of similarity; Fluency is the percentage of paraphrases judged to be both grammatical and meaningful.",
"Switching the rewriter architecture from LSTM to Transformer improves the human-judged semantic similarity by 5.1% and fluency by 6.5%.",
"The improvements described here lead to a gain of 9.6% in semantic similarity and 10.2% in fluency overall.",
"This improved Transformer-based rewriter is subsequently used for data augmentation.",
"Paraphrastic Data Augmentation: We demonstrate the utility of our improved lexically-constrained decoding via data augmentation with some simple rewriting heuristics and two augmentation strategies.",
"First, the model could be trained on the augmented (training) data.",
"Orthogonally, predictions can be made on all of the augmented (evaluation) data, which can then be aggregated.",
"We show experimental results on natural language inference (NLI, Section 5.1), question answering (QA, Section 5.2), and NMT (Section 5.3) tasks.",
"These results are merely indicative of the potential in data augmentation via constrained paraphrasing, and are by no means a thorough investigation of the strategies that yield the best improvements.",
"Such an investigation, however, could be enabled by our algorithmic improvements and practitioners' domain expertise.",
"Natural Language Inference: Natural language inference is the task of determining entailment.",
"Two sentences, a premise p and a hypothesis h, are labelled with ENTAILMENT, CONTRADICTION, or NEUTRAL depending on whether p logically entails, contradicts, or does not interact with h.",
"MultiNLI (MNLI) (Williams et al., 2018) is a large, multi-genre dataset for natural language inference.",
"The dataset is also divided into matched and mismatched portions based on whether the source of the evaluation data matches the source of the training set.",
"Recent models rely on contextual sentence encoders pre-trained on vast amounts of English monolingual text (Peters et al., 2018; Devlin et al., 2018).",
"We train and evaluate a model on MNLI, and find that data augmentation leads to improvements exceeding and complementary to those from ELMo, possibly due to improved lexical diversity during training and at inference.",
"Model: We use the model described in Bowman et al. (2019) (https://github.com/jsalt18-sentence-repl/jiant) with the default parameters.",
"They train a sentence representation model (possibly on top of ELMo) on the MNLI training set and subsequently train a clean task-specific model for each task (for this model, MNLI again).",
"The task-specific MNLI model roughly follows BiDAF (Seo et al., 2016), followed by an MLP.",
"We also train a model without the ELMo contextual layers to compare contextual sentence representations against data augmentation.",
"Since there is minor variance between different random seeds (Bowman et al. (2019) found the variance to be 0.2), we train each model twice and evaluate the best-performing model on the development set.",
"Paraphrase Generation: We generate paraphrases for our data augmentation experiments by negatively constraining on the most content-bearing token of each input sequence, as determined by inverse document frequency (IDF).",
"For a given input sequence s we calculate the IDF of each token t_i in s as log(|D| / |{d in D : t_i in d}|), where D is the set of all English sentences in the train set; a sketch of this computation follows after Table 3.",
"This relatively simple lexical constraint tends to force the decoder to rewrite the input sequence using different (but semantically related) words while maintaining fluency.",
"In practice, we observed an average unigram precision of 67.6%; i.e., 32.4% of tokens in paraphrases were not contained in their corresponding inputs.",
"Additional results using a positively constrained rewriter are in Appendix A.",
"Data Setup: We first rewrite all premises P to P' and all hypotheses H to H'; then, for each p_i in P and h_i in H, we include all four examples, (p_i, h_i), (p_i, h'_i), (p'_i, h_i), and (p'_i, h'_i), in the training set, always preserving the original corresponding gold label.",
"We include two copies of the original dataset in training to increase its weight.",
"The original MNLI dataset contains 393K training pairs, and 20K in each of dev and test, while the augmented dataset consists of 1.96M training pairs, and 79K in dev and test.",
"At test time, we rewrite the test sentence pairs.",
"A trained model can also make predictions on each of the three rewritten sentence pairs.",
"Together with the original prediction, these four can then be aggregated by assigning weights to each prediction source.",
"In our experiments, we perform this weighted aggregation (+Agg.) for each model, tuning on the development set (Appendix B).",
"Table 3: F1 scores on MNLI (Dev. / Test (m/mm)): Baseline 74.8 / 74.7 (74.8/74.6); +Agg. 75.0 / 74.9 (74.9/74.8); +Train 75.4 / 75.2 (75.0/75.3); +Train+Agg. 75.6 / 75.4 (75.1/75.7); +ELMo 75.8 / 75.0 (75.1/75.0); +ELMo+Agg. 75.9 / 75.2 (75.3/75.1); +ELMo+Train 76.4 / 75.6 (75.6/75.6); +ELMo+Train+Agg. 76.7 / 75.8 (75.9/75.7). +Train denotes training on augmented data; +Agg. denotes using a weighted aggregation. Scores on the development set are a weighted average between the matched (m) and mismatched (mm) portions of the dataset, while the test set scores are additionally broken down into each category.",
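As promised above, a minimal sketch of the IDF-based negative-constraint selection; whitespace tokenization is a simplification of whatever tokenizer is used upstream:

```python
import math
from collections import Counter

# Compute document frequencies over the training sentences, then ban the
# highest-IDF (most content-bearing) token of each input so the rewriter
# must paraphrase around it.
def build_idf(train_sentences):
    df = Counter()
    for sent in train_sentences:
        df.update(set(sent.split()))
    n = len(train_sentences)
    return {tok: math.log(n / cnt) for tok, cnt in df.items()}

def negative_constraint(sentence, idf):
    return max(sentence.split(), key=lambda tok: idf.get(tok, 0.0))
```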
"Experimental Results: We find in Table 3 that data augmentation helps during training and inference.",
"Not only are the total gains from augmentation comparable to those from ELMo, they are apparent even in the presence of the contextual sentence encoder.",
"This suggests that the gains from data augmentation through rewrites complement recent gains from contextual sentence encoders.",
"The fairest external comparison is with Bowman et al. (2019), as our model is identical.",
"Their best models achieve 76.2 F1 on development and 75.4 F1 on test.",
"On the development set, they see a gain of 0.6 points by using multi-task training and external datasets.",
"On that set, we report a total gain of up to 0.9 points purely through data augmentation.",
"With respect to absolute test set scores, our best model outperforms theirs by 0.4, showing that rewriter-based data augmentation can be a powerful method for NLP tasks.",
"Analysis: NLI systems have been shown to be brittle when the input is perturbed (Alzantot et al., 2018).",
"Even when the premise or hypothesis is changed in a way that preserves the entailment semantics, the NLI system may make an incorrect prediction where it was previously correct.",
"We present evidence showing that data augmentation for NLI reduces the brittleness of our model.",
"To demonstrate the brittleness of the baseline models, we analyze how predictions change.",
"The model trained on the original un-augmented dataset is evaluated on the original development set and each of the rewritten development sets, and we investigate the differences.",
"Table 4: Percentage of changed predictions on the MNLI development set using the baseline model without (top) and with (bottom) ELMo, over the rewritten sets (P', H), (P, H'), (P', H'), and after weighted aggregation (+Agg.). Without ELMo: no change 88.51 / 84.23 / 81.95 / 96.20; newly correct 4.23 / 5.33 / 6.08 / 1.75; newly incorrect 5.84 / 8.44 / 9.67 / 1.49; changed but still incorrect 1.42 / 2.00 / 2.30 / 0.57. With ELMo: no change 88.20 / 83.26 / 80.68 / 96.00; newly correct 4.03 / 5.45 / 6.11 / 1.80; newly incorrect 6.42 / 9.22 / 10.78 / 1.72; changed but still incorrect 1.35 / 2.08 / 2.42 / 0.47.",
"Table 4 shows how often original predictions differ from the corresponding predictions on the rewritten development sets; predictions can be (1) unchanged, (2) newly correct, (3) newly incorrect, or (4) changed but still incorrect, while Figure 4 shows how even relatively modest, semantically valid paraphrases can cause the NLI model to change its prediction incorrectly.",
"Given a perfect rewriter that always generates semantically equivalent paraphrases and a perfect NLI model robust to perturbations, we would expect no change in predictions between the original development set and the rewritten ones.",
"However, this is not what we observe; Table 4 shows that rewriting leads to a greater percentage of newly incorrect predictions than newly correct predictions.",
"We believe that the higher percentage of newly incorrect predictions on the rewritten development sets demonstrates the brittleness of the NLI system rather than semantic dissimilarity introduced by the rewriter.",
"We note that the aggregated predictions show the opposite pattern: we see a higher percentage of newly correct predictions than incorrect ones.",
"If the paraphrases were largely semantically dissimilar we would not expect any gain by combining predictions.",
"Given both the numerical boost seen by aggregation and the above examples, we hypothesize that the rewriter does not frequently change entailment semantics.",
"Figure 4: Cases where the baseline system changes its prediction on rewritten examples. Example 1. P: Visit at sundown or out of season to get the full flavor of the setting. H: The setting is better to visit at sundown or during low season. H': It is better to visit at sunset or during low season. Gold: Entailed; (P, H): Predict Entailed; (P, H'): Predict Neutral. Example 2. P: I had rejected it as absurd, nevertheless it persisted. H: It persisted even after I rejected it as an absurdity. H': It went on even after I turned it down as an absurdity. Gold: Entailed; (P, H): Predict Entailed; (P, H'): Predict Contradiction.",
"Because the semantics remain similar, and because the paraphrases were generated with constraints designed to introduce lexical diversity, we believe that the label-preserving data augmentation improves the NLI model by making it more tolerant of minor lexical differences, better able to generalize, and less inclined to memorize.",
"Question Answering: We apply our paraphrastic rewriter to the task of question answer sentence selection to see if augmenting with paraphrases leads to improvements.",
"The task is defined as follows: given a question q and a set of candidate sentences {c_i}, select the candidates which answer q.",
"Model: We adapt a popular neural architecture for NLI, InferSent (Conneau et al., 2017), to our QA sentence selection task.",
"In InferSent, the questions and answers (originally the premises and hypotheses) are embedded using an uncontextualized word embedding (e.g. GloVe); we also experiment with ELMo (Peters et al., 2018) to incorporate recent advancements in large-scale contextualized pre-training.",
"Bidirectional LSTMs (Graves and Schmidhuber, 2005) are run atop these embeddings and a max-pooling layer is used to generate a feature vector for both the question and the answer.",
"Following various matching methods (Mou et al., 2016) and a multi-layer feed-forward neural network, the model produces a final score.",
"We train the system following the method proposed by Rao et al. (2016), utilizing a ranking loss (Weston and Watkins, 1999) that contrasts positive answers against negative ones.",
"Paraphrase Generation: We augment each answer candidate sentence with exactly 1 paraphrase in the dataset using the following heuristics: (1) named entities shared between a specific answer and its corresponding question are retained as positive constraints; (2) correct answer spans are retained as positive constraints; and (3) words with the top-k IDFs (inverse document frequencies; hence important words) that are not positive constraints are selected as negative constraints to promote the lexical diversity of the paraphrases.",
"Data Setup: We augment the raw TREC-QA dataset (Wang et al., 2007) under the following orthogonal strategies: (1) augmenting the training set with the paraphrases generated via the approach described above; (2) augmenting the answer candidates at evaluation time, and choosing the max score among the paraphrases as the score (aggregation by voting).",
"Experimental Results: We evaluate our models using mean average precision (MAP) and mean reciprocal rank (MRR).",
"Model selection is done with early stopping to choose the epoch with the maximum MAP score.",
"Note that the Baseline (+ELMo) settings below fall back to the standard QA selection task, and our score under ELMo is comparable to earlier state-of-the-art results, e.g. by Rao et al. (2016).",
"Augmenting at evaluation time (aggregation by voting) results in a stable improvement (around +2~3% MAP and +2~6% MRR, whether or not the training data is also augmented); this shows that increasing the paraphrastic diversity of the answer candidates could potentially make the system more robust.",
"(Stopwords and tokens with non-letter characters, e.g. with, 42, n't, are excluded. k in {2, 3, 4} is a hyperparameter we tune; we found that generally k = 2 works best.)",
"However, augmenting the training set does not yield such improvements; we speculate that this may introduce some noise into the training data.",
"We apply our paraphrastic rewriter to the WMT 2016 Turkish-English translation task (Bojar et al., 2016b).",
"We see no improvement in English-to-Turkish translation, but see a 1.1 BLEU improvement when training an initial NMT system on half paraphrased and half original data, followed by continued training on the original data.",
"Full details of the experiments are in Appendix C. This was the highest concentration of standard data we experimented with, and future work will explore additional ways of data augmentation using paraphrases.",
"Lexically-constrained sequence decoding provides control over whether certain tokens or token sequences appear in the output.",
"Motivated by applications such as large-scale MT, we improved the speed for constrained decoding significantly by proposing a vectorized dynamic beam allocation algorithm.",
"We also added multi-state trie representations for robustness to corner cases.",
"Also reliant on the efficiency of constrained decoding is data augmentation via rewriting, where one might need to explore a variety of strategies with task-specific constraints on development data.",
"We trained an improved monolingual sentential rewriter and used it to rewrite data for NLP tasks.",
"We experimented with augmenting training data, aggregating predictions on rewritten test data, and both.",
"Using a few simple constraint heuristics, we showed improvements additive to ELMo in NLI and QA, as well as improvements in MT. The rewriter, along with the augmented data files, can be found at http://nlp.jhu.edu/parabank.",
"We hope this will enable future exploration of augmentation strategies for a variety of NLP tasks.",
"Thanks to Michael Denkowski, who first suggested using a trie to represent constraints in a group discussion.",
"This research was supported in part by DARPA AIDA and DARPA LORELEI."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"objective",
"other",
"abstain",
"other",
"other"
] |
[
"Sentence ordering is the task of arranging the sentences of a given text in the correct order.",
"Recent work using deep neural networks for this task has framed it as a sequence prediction problem.",
"In this paper, we propose a new framing of this task as a constraint solving problem and introduce a new technique to solve it.",
"Additionally, we propose a human evaluation for this task.",
"The results on both automatic and human metrics across four different datasets show that this new technique is better at capturing coherence in documents.",
"Sentence ordering is the task of arranging sentences into an order which maximizes the coherence of the text (Barzilay and Lapata, 2008).",
"This is important in applications where we have to determine the sequence of pre-selected set of information to be presented.",
"This task has been well-studied in the community due to its significance in downstream applications such as the ordering of: concepts in concept-to-text generation (Konstas and Lapata, 2012), information from each document in multi-document summarization (Barzilay and Elhadad, 2002; Nallapati et al., 2017), events in storytelling (Fan et al., 2019; Hu et al., 2019), cooking steps in recipe generation (Chandu et al., 2019), and positioning of new information in existing summaries for update summarization (Prabhumoye et al., 2019).",
"Student essays are evaluated based on how coherent and well structured they are.",
"Hence, automated essay scoring (Burstein et al., 2010; Miltsakaki and Kukich, 2004) can use this task to improve the efficiency of their systems.",
"Early work on coherence modeling and sentence ordering task uses probabilistic transition model based on vectors of linguistic features (Lapata, 2003), content model which represents topics as states in an HMM (Barzilay and Lee, 2004), and entity based approach (Barzilay and Lapata, 2008).",
"Recent work uses neural approaches to model coherence and to solve sentence ordering task.",
"Li and Hovy (2014) introduced a neural model based on distributional sentence representations using recurrent or recursive neural networks and avoided the need of feature engineering for this task.",
"In (Li and Jurafsky, 2017), they extend it to domain independent neural models for coherence and they introduce new latent variable Markovian generative models to capture sentence dependencies.",
"These models used windows of sentences as context to predict sentence pair orderings.",
"Gong et al. (2016) proposed end-to-end neural architecture for sentence ordering task which uses pointer networks to utilize the contextual information in the entire piece of text.",
"Recently hierarchical architectures have been proposed for this task.",
"In (Logeswaran et al., 2018), the model uses two levels of LSTMs to first get the encoding of the sentence and then get the encoding of the entire paragraph.",
"Cui et al. (2018) use a transformer network for the paragraph encoder to allow for reliable paragraph encoding.",
"Prior work (Logeswaran et al., 2018; Cui et al., 2018; Kumar et al., 2020) has treated this task as a sequence prediction task where the order of the sentences is predicted as a sequence.",
"The decoder is initialized by the document representation and it outputs the index of sentences in sequential order.",
"Only in (Chen et al., 2016), this task is framed as a ranking problem.",
"In this work, a pairwise score is calculated between two sentences and then the final score for an order is obtained by summing over all the scores between pairs of sentences.",
"The order which has the maximum score is given as output.",
"Instead of considering all possible permutations of a given order, it uses beam-search strategy to find a suboptimal order.",
"Most of the recent work (Gong et al., 2016; Lo-geswaran et al., 2018; Cui et al., 2018) tries to leverage the contextual information but has the limitation of predicting the entire sequence of the order.",
"This has the drawback that the prediction at the current time step is dependent on the prediction of the previous time step.",
"Another limitation of the prior work is the availability of good sentence representations that can help in determining the relative order between two sentences.",
"For this work we frame the task as a constraint learning problem.",
"We train a model which learns to predict the correct constraint given a pair of sentences.",
"The constraint learnt by our model is the relative ordering between the two sentences.",
"Given a set of constraints between the sentences of a document, we find the right order of the sentences by using sorting techniques.",
"Since we don't attach a score to an order, we don't have to consider all the permutations of an order.",
"Our main contribution is a new framing for the sentence ordering task as a constraint solving problem.",
"We also propose a new and simple approach for this task in this new framework.",
"We show that a simple sorting technique can outperform the previous approaches by a large margin given that it has good sentence representations.",
"The bottleneck for most of the hierarchical models is memory required by the representations of all the sentences and the representation of the paragraph.",
"The new framing also obviates these memory issues.",
"The code can be found at https://github.com/shrimai/Topological-Sort-for-Sentence-Ordering.",
"Additionally, we introduce a human evaluation for this task and show that our model outperforms the state-of-the-art on all the metrics.",
"For our task we have a set of N documents D = {d_1, ..., d_N}.",
"Let the number of sentences in each document d_i be denoted by v_i, where for all i, v_i >= 1.",
"Our task can be formulated as follows: if we have a set {s_{o_1}, ..., s_{o_{v_i}}} of v_i sentences in a random order o = [o_1, ..., o_{v_i}], then the task is to find the right order of the sentences o* = [o*_1, ..., o*_{v_i}].",
"Prior work (Logeswaran et al., 2018; Cui et al., 2018) learns to predict the sequence of the correct order o*.",
"In this formulation of the task, we have a set C_i of constraints for document d_i.",
"These constraints C_i represent the relative ordering between every pair of sentences in d_i.",
"Hence, we have |C_i| = C(v_i, 2).",
"For example, if a document has four sentences in the correct order s_1 < s_2 < s_3 < s_4, then we have the six constraints {s_1 < s_2, s_1 < s_3, s_1 < s_4, s_2 < s_3, s_2 < s_4, s_3 < s_4}.",
"Constraints C_i are learnt using a classifier neural network described in Section 2.2.",
"We finally find the right order o* using topological sort on the relative orderings between all |C_i| pairs of sentences.",
"Topological sort (Tarjan, 1976) is a standard algorithm for linear ordering of the vertices of a directed graph.",
"The sort produces an ordering o of the vertices such that for every directed edge u v from vertex u to vertex v , u comes before v in the ordering o .",
"We use the depth-first search based algorithm which loops through each node of the graph, in an arbitrary order.",
"The algorithm visits each node n and prepends it to the output ordering o only after recursively calling the topological sort on all descendants of n in the graph.",
"The algorithm terminates when it hits a node that has been visited or has no outgoing edges (i.e. a leaf node).",
"Hence, we are guaranteed that all nodes which depend on n are already in the output ordering o when the algorithm adds node n to o .",
"We use topological sort to find the correct ordering o of the sentences in a document.",
"The sentences can represent the nodes of a directed graph and the directed edges are represented by the ordering between the two sentences.",
"The direction of the edges are the constraints predicted by the classifier.",
"For example, if the classifier predicts the constraint that sentence s_1 precedes s_2, then a directed edge s_1 -> s_2 is added from the node of s_1 to the node of s_2.",
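A minimal sketch of this procedure is shown below; precedes(u, v) stands in for the trained classifier's pairwise prediction. Note that with noisy pairwise predictions the graph may contain cycles, which this sketch breaks silently via its visited set:

```python
# DFS-based topological sort applied to sentence ordering: build a graph
# from the predicted pairwise constraints, then prepend each node to the
# output only after all of its descendants have been placed.
def topological_order(num_sents, precedes):
    graph = {u: [v for v in range(num_sents) if v != u and precedes(u, v)]
             for u in range(num_sents)}
    order, visited = [], set()

    def visit(u):
        if u in visited:
            return
        visited.add(u)
        for v in graph[u]:
            visit(v)
        order.insert(0, u)  # prepend only after all descendants are placed

    for u in range(num_sents):
        visit(u)
    return order
```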
"This algorithm has a time complexity of O(v_i + |C_i|) for a document d_i.",
"In our current formulation, all the constraints are predicted before applying the sort.",
"Hence, we have to consider all |C_i| = C(v_i, 2) edges in the graph, and the time complexity of our current formulation is O(v_i^2).",
"But the same technique could be adopted using a Merge Sort (Knuth, 1998) algorithm, in which case the time complexity would be O(v_i log v_i).",
"In this case, the sort algorithm is applied first, and a constraint is predicted only for the two sentences whose relative ordering is required at sort time.",
"We build a classifier to predict a constraint between two sentences s 1 and s 2 (say).",
"The constraint learnt by the classifier is the relative ordering between the two sentences.",
"Specifically, the classifier is trained to predict whether s_2 follows s_1 or not, i.e., the classifier predicts the constraint s_1 < s_2.",
"BERT-based Representation (B-TSort).",
"We use the Bidirectional Encoder Representations from Transformers (BERT) pre-trained uncased language model (Devlin et al., 2019) and fine-tune it on each dataset using a fully connected perceptron layer.",
"Specifically, we leverage the Next Sentence Prediction objective of BERT and get a single representation for both sentences s 1 and s 2 .",
"The input to the BERT model is the sequence of tokens of sentence s 1 , followed by the separator token [SEP]', followed by the sequence of tokens for sentence s 2 .",
"We use the pooled representation across all the time steps.",
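As an illustration, a pairwise constraint classifier in this style could be set up as follows, assuming the HuggingFace transformers API; the label convention and the omission of fine-tuning code are our simplifications, not the authors' released code:

```python
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def predict_precedes(s1: str, s2: str) -> bool:
    """True if the classifier predicts the constraint s1 < s2."""
    # The tokenizer builds the paired input: [CLS] s1 [SEP] s2 [SEP]
    inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)
    logits = model(**inputs).logits
    return logits.argmax(dim=-1).item() == 1  # assumed label convention
```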
"LSTM-based Representation (L-TSort).",
"In this model we get two separate representations h 1 and h 2 for s 1 and s 2 from a bi-directional LSTM encoder, respectively.",
"We pass the concatenation of h 1 and h 2 as input to two layers of perceptron for constraint prediction.",
"This model is trained to gain insight on the contribution of pre-trained sentence representations for the constraint prediction formulation of the task.",
"This section describes the datasets, the evaluation metric and the results of our experiments.",
"The hyper-parameter settings are reported in the Appendix.",
"NSF, NIPS, and AAN abstracts.",
"These three datasets contain abstracts from NIPS papers, ACL papers, and the NSF Research Award Abstracts dataset respectively and are introduced in (Logeswaran et al., 2018).",
"The paper also provides details about the statistics and processing steps for curating these three datasets.",
"SIND caption.",
"We also consider the SIND (Sequential Image Narrative Dataset) caption dataset (Huang et al., 2016) used in the sentence ordering task by (Gong et al., 2016).",
"All the stories in this dataset contain five sentences each and we only consider textual stories for this task.",
"Attention Order Network (AON).",
"This is the current state-of-the-art model (Cui et al., 2018), which formulates the sentence ordering task as an order prediction task.",
"It uses a LSTM based encoder to learn the representation of a sentence.",
"It then uses a transformer network based paragraph encoder to learn a representation of the entire document.",
"It then decodes the sequence of the order by using a LSTM based decoder.",
"BERT Attention Order Network (B-AON).",
"To have a fair comparison between our model and the AON model, we replace the LSTM based sentence representation with the pre-trained uncased BERT model.",
"This model plays a pivotal role of giving us an insight into how much improvement in performance we get only due to BERT.",
"Perfect Match (PMR): calculates the percentage of samples for which the entire sequence was correctly predicted (Chen et al., 2016).",
"PMR = (1/N) sum_{i=1}^{N} 1{ô_i = o*_i}, where N is the number of samples in the dataset and ô_i is the predicted order.",
"It is the strictest metric.",
"Sentence Accuracy (Acc): measures the percentage of sentences for which their absolute position was correctly predicted (Logeswaran et al., 2018).",
"Acc = (1/N) sum_{i=1}^{N} (1/v_i) sum_{j=1}^{v_i} 1{ô_{ij} = o*_{ij}}, where v_i is the number of sentences in the i-th document.",
"It is also a stringent metric.",
"Kendall Tau (Tau): quantifies the distance between the predicted order and the correct order in terms of the number of inversions (Lapata, 2006).",
"tau = 1 - 2I / C(v_i, 2), where I is the number of pairs in the predicted order with incorrect relative order, and tau lies in [-1, 1].",
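A direct implementation of the metric as defined above:

```python
from itertools import combinations

# Kendall's tau: tau = 1 - 2I / C(v_i, 2), where I counts sentence pairs
# whose relative order is inverted in the prediction.
def kendall_tau(predicted, gold):
    pred_pos = {s: i for i, s in enumerate(predicted)}
    gold_pos = {s: i for i, s in enumerate(gold)}
    pairs = list(combinations(gold, 2))
    inversions = sum((pred_pos[a] < pred_pos[b]) != (gold_pos[a] < gold_pos[b])
                     for a, b in pairs)
    return 1 - 2 * inversions / len(pairs)
```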
"Rouge-S: calculates the percentage of skip-bigrams for which the relative order is predicted correctly (Chen et al., 2016).",
"Skip-bigrams are the total number of pairs (cid:0) v i 2 (cid:1) in a document.",
"Note that it does not penalize any arbitrary gaps between two sentences as long as their relative order is correct.",
"Rouge S = 1 ( vi 2 ) Skip ( o ) Skip ( o ) , where the Skip ( . ) function returns the set of skip-bigrams of the given order.",
"Longest Common Subsequence (LCS): calculates the ratio of longest common sub-sequence (Gong et al., 2016) between the predicted order and the given order (consecutiveness is not necessary, and higher is better).",
"Human Evaluation We introduce a human evaluation experiment to assess the orders predicted by the models.",
"We set up a manual pairwise comparison following (Bennett, 2005) and present the human judges with two orders of the same piece of text.",
"The judges are asked Pick the option which is in the right order according to you.",
"They can also pick a third option No Preference' which corresponds to both the options being equally good or bad.",
"In total we had 100 stories from the SIND dataset 2 annotated by 10 judges.",
"We setup three pairwise studies to compare the B-TSort vs AON order, B-TSort vs Gold order and AON vs Gold order (Gold order is the actual order of the text).",
"Each judge annotated a total of 30 stories, 10 in each of the above mentioned categories.",
"The judges were naive annotators.",
"Table 1 shows the results of the automated metrics for the NIPS and SIND datasets 3 .",
"It shows that AON 4 model gains on all metrics when the sentence embeddings are switched to BERT.",
"The L-TSort model which does not utilize BERT embeddings comes close to AON performance on Rouge-S and Tau metrics.",
"This demonstrates that the simple L-TSort method is as accurate as AON in predicting relative positions but not the absolute positions (PMR and Acc metric).",
"Table 1 shows that our method B-TSort does not perform better 2 We choose SIND because all the stories contain 5 sentences and hence it is easy to read for the judges.",
"The orders of the stories are easier to judge as compared to the orders of scientific abstracts like NSF, NIPS and AAN as they require the judges to have an informed background.",
"3 We fine-tune BERT which is memory intensive.",
"Hence, we show the results of B-AON only on these two datasets as they need 2 transformer layers for paragraph encoder (Cui et al., 2018) 4 We use the code provided by the authors to train the AON and B-AON model.",
"The numbers reported in Table 1 and 2 are our runs of the model.",
"Hence, they differ from the numbers reported in the paper (Cui et al., 2018).",
"only due to BERT embeddings but also due to the design of the experiment.",
"Note that BERT has been trained with the Next Sentence Prediction objective and not the sentence ordering objective like AL-BERT (Lan et al., 2020).",
"We believe that framing this task as a constraint solving task will further benefit from pre-trained language model like AL-BERT.",
"Table 2 shows results for the NSF and AAN datasets and the B-TSort model performs better than the AON model on all metrics.",
"Table 3 shows results for the three human evaluation studies on the SIND dataset.",
"It shows that human judges prefer B-TSort orders 10% more number of times than the B-AON orders 5 .",
"The reference order may not be the only correct ordering of the story.",
"The variability in the orders produced by B-TSort and B-AON is not very high and hence in comparison with Gold orders, we don't see much difference in human preferences.",
"The low scores of AON could be due to the fact that it has to decode the entire sequence of the order.",
"The search space for decoding is very high (in the order of v i ! ).",
"Since our framework, breaks the problem to a pairwise constraint problem, the search space for our model is in the order of v 2 i .",
"Discussion: We perform additional analysis to determine the displacement of sentences in the predicted orders of the models, scalability of the models for longer documents, and an understanding of quality of the human judgements.",
"5 Examples of B-TSort and B-AON orders are shown in Table 6 and 7 for SIND and NIPS dataset in Appendix.",
"Displacement of sentences in predicted orders is measured by calculating the percentage of sentences whose predicted location is within 1, 2 or 3 positions (in either direction) from their original location.",
"A higher percentage indicates less displacement of sentences.",
"We observed that in spite of lack of a global structure, B-TSort consistently performs better on all datasets for all three window sizes as shown in Table",
"4. Observe that as window size reduces, the difference between B-TSort and B-AON percentages increases.",
"This implies that displacement of sentences is higher in B-AON despite taking the whole document into account.",
"We additionally perform a comparison of models on documents containing more than 10 sentences and the results are shown in Table",
"5. B-TSort consistently performs better on all the metrics.",
"SIND dataset is omitted in these experiments as the maximum number of sentences in the story is five for all the stories in the dataset.",
"For each dataset, the Tau difference for longer documents is much higher than the Tau difference on the overall dataset (Ta-ble 1 and 2).",
"This implies that B-TSort performs much better for longer documents.",
"Note that the AON model generates the order and hence need not generate positions for all the sentences in the input.",
"We calculate the percentage of mismatches between the length of the input document and the generated order.",
"For AON model on the NSF dataset which has longest documents, the overall mismatch is 5.85% (Table 4), while the mismatch for documents with more than 10 sentences is 11.60%.",
"The AON model also produces an overall mismatch of 0.84 % on AAN documents while producing a mismatch of 5.17% on longer AAN documents.",
"Similarly, the B-AON model has an overall mismatch of 3.48% for NIPS dataset, and 33.33% mismatch for longer documents.",
"This problem does not arise in our design of the task as it does not have to stochastically generate orders.",
"To better understand the choices of human judges, we observe the average length of stories Model PMR Acc Tau Rouge-S LCS NIPS abstracts B-AON 0.0 29.18 0.51 74.64 63.81 B-TSort 0.0 39.43 0.74 83.26 71.68 NSF abstracts AON 2.12 21.42 0.41 67.45 55.47 B-TSort 0.67 28.57 0.64 68.46 64.86 AAN abstracts AON 0.0 22.70 0.40 68.90 56.19 B-TSort 0.0 36.86 0.69 78.52 72.01 Table 5: Analysis on NIPS, NSF and AAN datasets for documents longer than 10 sentences.",
"calculated in number of tokens.",
"For the B-TSort vs B-AON study, we discover that the average length of the stories for B-TSort, B-AON and No Preference' chosen options is 86, 65 and 47 respectively.",
"This means that B-TSort is better according to human judges for longer stories.",
"Similarly for B-TSort vs Gold experiment, the human judges were confused with longer stories, reiterating that B-TSort performs well with long stories.",
"We have shown a new way to design the task of sentence ordering.",
"We provide a simple yet efficient method to solve the task which outperforms the state of the art technique on all metrics.",
"We acknowledge that our current model has the limitation of not including the entire context of the paragraph while making the decision of the relative order of the pairs.",
"Our future work is to include the paragraph representation in the constraint prediction model.",
"This will help our methodology to have the benefit of making informed decision while also solving constraints.",
"This work was supported in part by ONR Grant N000141812861, NSF IIS1763562, and Apple.",
"We would also like to acknowledge NVIDIAs GPU support."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"Information integration from different modalities is an active area of research.",
"Human beings and, in general, biological neural systems are quite adept at using a multitude of signals from different sensory perceptive fields to interact with the environment and each other.",
"Recent work in deep fusion models via neural networks has led to substantial improvements over unimodal approaches in areas like speech recognition, emotion recognition and analysis, captioning and image description.",
"However, such research has mostly focused on architectural changes allowing for fusion of different modalities while keeping the model complexity manageable.",
"Inspired by neuroscientific ideas about multisensory integration and processing, we investigate the effect of introducing neural dependencies in the loss functions.",
"Experiments on multimodal sentiment analysis tasks with different models show that our approach provides a consistent performance boost.",
"Human beings perceive the world as a unified whole, not in individual sensory modalities.",
"While traditionally different sensory models have been studied in isolation, it has been well recognized that perception operates via integration of information from multiple sensory modalities.",
"Research in multimodal fusion aims to achieve a similar goal in artificial models: extract and integrate all information from different input modalities.",
"For example, if someone is sarcastic, the facial expression and voice intonation provide information not directly decipherable from the uttered words.",
"If a model only looks at the text of the interaction, then it is unlikely to classify this interaction currently.",
"Current research in deep multimodal fusion primarily deals with architectural improvements to create complex feature-rich, yet efficient representations (Zadeh et al., 2017; Liu et al., 2018; Hazarika et al., 2020).",
"The hope is that more complex models will be able to integrate the complementary information from different unimodal representations into a unified common representation.",
"Learning such unified representations, however, is a challenging task.",
"Different modalities can present the same information in radically different ways with emphasis on different aspects of the content.",
"These heterogeneities across different modalities mean that learning multimodal representations must deal with feature shifts, distributional effects, nuisance variation and a variety of related challenges (Baltruaitis et al., 2018).",
"Inspiring from work in multisensory neural processing, we define a loss regularizer that we call synergy to train these models.",
"Synergy has a specific meaning in information-theoretic literature (Cover, 1999).",
"The synergy between random variables X and Y refers to the unique mutual information that X provides about Y .",
"While our loss function is not the same as information theoretic synergy, the intuition behind our proposed loss is the same as actual synergy; to try to maximize dependencies between the representations.",
"As our method uses neural networks or kernel-based methods to capture distributional divergences, we expect that this method will allow our model to capture complex dependencies which cannot be captured via techniques like subspace alignment.",
"We test our proposed training loss on different multimodal fusion architectures including LFN(Zadeh et al., 2017), MFN (Zadeh et al., 2018a), MAGBERT(Rahman et al., 2020) and MIM (Han et al., 2021).",
"Our experiments show that training with synergy maximization improves the result by a significant margin.",
"In this section, we give an overview of the basic ideas relevant to this work; primarily mutual information, and existing work on deep multimodal fusion and neural synergy.",
"The problem in the most abstract terms is a supervised learning problem.",
"We are provided with a dataset of N observations D = ( x i , y i ) Ni =1 .",
"All x i come from a space X and y i from Y .",
"We are provided a loss function L : Y Y R which is the task loss.",
"Our goal is to learn a model F : X Y such that the total loss L = (cid:80) i L ( F ( x i ) , y i ) is minimized.",
"In multimodal fusion the space of inputs X naturally decomposes into K different modalities X = (cid:81) Kj =1 X j .",
"We use X j to represent random variables which form the individual modality specific components of the input random variable X .",
"A common way to learn such a multimodal function is to decompose it into two components:",
"a) an embedding component E which fuses information into a high dimensional vector in R d and",
"b) a predictive component P which maps vector from R d to Y .",
"Furthermore since the different modalities are often no directly compatible with each other (for eg text and image), E itself is decomposed into",
"a) modality specific readers F i X i R d i which are specifically designed for each individual modality X i and",
"b) a fusion component F : (cid:81) i R d i R d which fuses information from eah individual modality embedding.",
"F is provided with uni-modal representations of the inputs X i = ( X 1 , X 2 , . . . XK ) obtained through embedding networks f i .",
"F has to retain both unimodal dependencies (i.e relations between features that span only one modality) and multi-modal dependency (i.e relationships between features across multiple modalities).",
"This decomposition has two advantages",
"a) the individual modality reader can be pre-trained on the task at hand or even from a larger dataset (for example BERT (Devlin et al., 2018) for language, Resnet (He et al., 2016) for images ) which allows us to leverage wider modality specific information and",
"b) often but not always each individual modality is in principle enough to correctly predict the output 2.2 Distributional Divergences Divergence is a functional which characterizes the distance or \"discrepancy\" between two probability distributions on the same space.",
"Divergence however is a different notion than distance because divergences are not necessarily symmetric.",
"A common measure of discrepancy between two distributions is the Kullback-Liebler divergence (KL divergence) (Cover, 1999).",
"This divergence is often also used implicitly for estimating dependence between two random variables.",
"Mutual information (MI) is a measures of dependence between two random variable X and Y capable of incorporating multiple types of relationships between them.",
"If we have variables X and Y , then the mutual information between them is given by I ( X ; Y ) = KL [ p XY ( x, y ) p X ( x ) p Y ( y )] where p XY is the joint probability density of the pair ( X, Y ) , and p X , p Y are the marginal probability densities of X, Y respectively.",
"Estimation of Divergence Estimating entropic differences between two distributions purely from their samples is a difficult task (Kinney and Atwal, 2014).",
"As such there have been multiple types of divergences proposed over the years (Gretton et al., 2005; Studen`y and Vejnarov, 1998).",
"Moreover in recent years, several estimators have been proposed for entropic divergences based on variational methods (Belghazi et al., 2018; Hjelm et al., 2018; Amjad and Geiger, 2019).",
"These estimators use flexible neural networks as a contrast function and optimize a variational bound.",
"We describe two such methods which are used in our experiments Neural Mutual Information (Belghazi et al., 2018) is a variational method to estimate the KL divergence between two distributions.",
"It is estimated via gradient ascent on the Donsker-Varadhan bound (Donsker and Varadhan, 1985).",
"The Donsker Varadhan bound shows that: KL ( P, Q ) sup g EX P [ g ( X )] EX Q [exp g ( X ) ] The Young-Fenchel duality shows that the gap is zero; i.e. at the optima the right side of the above expression matches the KL divergence.",
"Instead of a global maximization over all functions one can instead use a family of 1168 functions parameterized via neural networks.",
"The bound obtained thus is necessarily lower than the actual KL, but now one can use gradient descent to optimize the network.",
"Maximum Mean Discrepancy or MMD (Gretton et al., 2012) is a kernel based estimator of divergence between distributions.",
"Mathematically the MMD between two distributions P and Q is given by the norm of the difference of the mean embeddings of P and Q in the RKHS space of the chosen kernel.",
"Further extensions to MMD have been developed based on neural networks which provide non-universal but more powerful kernel based tests (Liu et al., 2020).",
"The above formula can be estimated purely via samples by using the Kernel matrix K ( x i , x j ) = ( x i ) T ( x j ) where represents the corresponding RKHS embedding function The final monte carlo estimator is given by:",
"Kurtosis is a statistical measure which is used to categorize the behavior of the distribution tails.",
"It is more sensitive to rare events and hence is used for distributions with \"fatter tails\".",
"For univariate variables, kurtosis is the standardized fourth moment i.e E [( X ) 4 ] ( E [( X ) 2 ]) 2 It is often used to measure deviations from normality.",
"random variables is also sometimes used as a measure of dependence between them.",
"It is one of the metrics used by Rosas et al. (2019); Barrett and Seth (2011) to analyze neural complexity and brain functional connectivity.",
"Earlier work on neural fusion models primarily relied on an early fusion of features.",
"These approaches simply concatenated inputs of different modalities and used simple models to combine requisite information.",
"Despite their simplicity, such models often perform well and are robust (Narayanan et al., 2019).",
"More modern methods, however, deploy fancier methods to induce information aggregation.",
"One set of models used gradient descent to try to force different feature networks to learn about each other and embed information jointly.",
"This process can be enhanced by adding specific forms of regularization such as reconstruction loss (Mai et al., 2020), or auxiliary task loss (Chen et al., 2017; Yu et al., 2021).",
"Another family of models uses linear algebra based methods to combine unimodal representations.",
"Methods like those of Liu et al. (2018); Chen and Mitra (2018); Chachlakis et al. (2019) try to fuse information via tensor decomposition of high dimensional product tensors of individual unimodal representations.",
"Other methods use subspace alignment (Lee et al., 2019; Yu et al., 2012) or correlation loss (Sun et al., 2020; Hazarika et al., 2020) to merge different representations.",
"However, in some form or other, these models rely primarily on architectural changes.",
"We, on the other hand, do not want to focus on such changes.",
"Instead, our goal was to use insights from neuroscience to provide a methodology that can be deployed atop any standard multimodal fusion model.",
"A common and vital feature of nervous systems is the integration of information arriving simultaneously from multiple sensory pathways.",
"The underlying neural structures have been found to be related in both vertebrates and invertebrates.",
"The classic understanding of this process is that different sensory modalities are processed individually and then combined in various multimodal convergence zones, including cortical and subcortical regions (Ghazanfar and Schroeder, 2006), as well as 1169 Figure 1: A general multimodal fusion Architecture.",
"Studies in the superior colliculus (Meredith et al., 1987) showed that multiple sensory modalities are processed in this brain stem region, with some neurons being exclusively unimodal and others being multimodal.",
"Hypotheses of encoding of multimodal information include changes in neuronal firing rates (Pennartz, 2009) or a combinatorial code in population of neurons (Osborne et al., 2008; Rohe and Noppeney, 2016).",
"Evidence shows that while multimodal representations are distinct from unimodal ones, there is sufficient overlap between the set of neurons that process different sensory modalities.",
"For example, Follmann et al. (2018) show that even in a simple crustacean organism, more than half the neurons in the commissural ganglion are multimodal.",
"Moreover, they show that in 30% of these multimodal neurons, responses to one modality were predictive of responses to other modalities.",
"Both these facts suggest that the neural representations across different modalities have high information about each other.",
"Studies of multisensory collicular neurons suggest that their crossmodal receptive fields (RF) often overlap (Spence et al., 2004).",
"This pattern is also found in multisensory neurons present in other brain regions.",
"As such, a spatiotemporal hypothesis of multisensory integration has been suggested: superadditive multimodal processing is observed when information from different modalities comes from spatiotemporally overlapping receptive fields (Recanzone, 2003; Wallace et al., 2004; Stanford et al., 2005).",
"Since multimodal cortical neurons are generally downstream of modality-specific regions, the information about RF overlap is present in their input unimodal neural representations.",
"Moreover, the sensory-specific nuclei of the thalamus have been shown to feed multisensory information to primary sensory specific-cortices (Kayser et al., 2008).",
"This suggests the existence of explicit feedback connection from the multimodal representations to unimodal representations .",
"Cortical and subcortical networks often contain clusters of strongly connected neurons.",
"Functionally the existence of such cliques imply highly integrated pyramidal cells that handle a disproportionately large amount of traffic (Harriger et al., 2012).",
"In cortical circuits, around 20% of the neurons account for 80% of the information propagation (Nigam et al., 2016; Van Den Heuvel and Sporns, 2011).",
"Timme et al. (2016); Faber et al. (2019) demonstrate that multimodal computation tends to concentrate in such local cortical clusters.",
"They also found significantly lower kurtosis in such clusters and that dependence between oscillations was proportional to the amount of information flow.",
"Sherrill et al. (2020) show that highly kurtotic neural activity positively related when multiple external stimuli are provided.",
"Thus, kurtosis in neural firings is a representation of the dependence between inputs.",
"This suggests that when input kurtosis is high there is more significant cognitive processing and information flow required to extract relevant 1170 information .",
"For our purposes we will limit ourselves to talk about tasks similar to the MOSI dataset.",
"In this setting the input has three modalities viz audio ( a ), visual ( v ), and textual language ( l ).",
"The fusion problem involves learning a representation M f that combined the uni-modal representations of the inputs X a,v,l = ( X a , X v , X l ) .",
"We modify the base neural architecture to incorporate the global structure explained in the last section.",
"We propose a way to incorporate such changes without major architectural change into current baseline designs.",
"The key component is the additional network (colored in red) in Figure 1 which we shall call as C-network.",
"The C-network takes as input the individual unimodal representations and the fused representation and attempts to force a specific form of dependency as explained below.",
"C-Network The purpose of the C-Network is to try to enforce on the model the three primary characteristics of real neural circuits explained in the earlier section.",
"We list them here and describe how we attempt to incorporate those characteristics in a more standard model.",
"Individual uni-modal representations should be predictive of other uni-modal representations.",
"We try to achieve this by simply predicting on modality representation by the combination of others.",
"Q i refers to a modality associated neural network which attempts to reconstruct the unimodal representation Z i from the other representations Z i .",
"The error between the two is penalized in the form of a reconstruction loss between modalities i.e. we add a penalty of the form: LL 2 = || Q i ( Z i ) Z i || 2 Multimodal representation should be feedback into input neurons to align and capture information between them.",
"Providing feedback during inference time from the multimodal representation would be ideal.",
"However this would make the overall prediction recurrent, something fundamentally different from most current architectures.",
"Moreover given current high dimensional encoders; doing such processing would be extremely resource intensive.",
"As such we aim to achieve this feedback by treating the multimodal representation and unimodal representation spaces as different domains and adding a loss of the form: L d = d ( p ( g i ( Z i )) , p ( g i ( Z ))) The purpose of the aforementioned loss is to align the distributions of the features in the same embedding space of the mapping from the multimodal and unimodal domains.",
"d represents a measure that captures the discrepancy between the distributions, g i refers to neural networks for projecting and aligning the combined representation Z with unimodal representations Z i , and p denotes the empiri-cal/sample distribution of the corresponding features.",
"In our experiments, for d we use the MMD discrepancy (Gretton et al., 2012) and KL divergence as the metric; though other divergences can also be used.",
"Note that this loss by itself can be minimized by forcing the g functions to ignore their inputs.",
"We prevent this by first doing a random projection of the features 1 into a smaller dimensional vector space and then apply an invertible neural network.",
"Such alignment losses have been used in works on domain adaptation (Motiian et al., 2017) under the name semantic loss or confusion loss.",
"We refer the readers to Motiian et al. (2017); Li et al. (2019) for more details on semantic losses.",
"Note that instead of aligning the features via some kind of embedding based distributional distance, one could try to maximize mutual information between the embeddings as well.",
"We experiment with one such model in our experiment and as the results show, found it to be slightly worse than using MMD based alignment loss.",
"Individual unimodal and multimodal representations should have low kurtosis.",
"To ensure this condition we estimate the multivariate kurtosis by plugging in standard estimators for the mean and covariates.",
"The final kurtosis estimator used is given by: 1 similar to Johnson Lindenstrauss projections (Landweber et al., 2016) 1171 = 1 n n (cid:88) i [(( z i z ) TS 1 ( z i z )) 2 ] where z i here are samples from the Z features in the model (where Z can be unimodal features like Z a or fused final feature Z ).",
"z refers to the empirical mean feature z = n (cid:80) i z i n and S is the empirical covariance matrix S = n (cid:80) i ( z i z )( z i z ) T n .",
"An important thing to note here is that high dimensional kurtosis values can be highly sensitive to outliers.",
"As such we regularize the estimate by doing three things:",
"a) We cap the max norm of the difference vectors during estimation.",
"b) We scale up the diagonal of the covariance matrix to reduce its condition number",
"c) Finally the covariance matrix itself is computed via a decaying moving average over a window of multiple batches to produce smoother estimates before the inversion operation.",
"During training we add the regularization penalties described earlier along with the usual maximum likelihood based objective.",
"The different loss components are weighted with seperate hyper-parameters.",
"Note that the C-Network is purely a training time addition, and is not invoked during inference.",
"Hence the additional network invoke zero additional time during testing.",
"An algorithmic description of the full method is presented in the Appendix D 5 Experiments 5.1 Datasets We empirically evaluate our methods on two commonly used datastes for multimodal training viz CMU-MOSI and CMU-MOSEI.",
"CMU-MOSI (Wllmer et al., 2013) is sentiment prediction taks on a set of short youtube video clips.",
"CMU-MOSEI (Zadeh et al., 2018b) is a similar dataset consisting of around 23k review videos taken from YouTube.",
"The output in both cases is a sentiment score in [ 3 , 3] .",
"For each dataset, three modalities are available; audio, visual frames, and language.",
"Preliminary features on each modality is obtained as follows: Audio: Features are extracted from the sund recordings using the method of Degottex et al. (2014).",
"Language: The video transcripts are converted to word embeddings using BERT (Devlin et al., 2018) or Glove (Pennington et al., 2014) Visual: Visual features are extracted using FACET (iMotion) which provides facial action units vectors.",
"We run our experiments with the following architectures: FLSTM (Narayanan et al., 2019) is the baseline early fusion LSTM architecture used by Zadeh et al. (2017) Tensor Fusion Network or TFN (Zadeh et al., 2017) combined information via pooling of a high dimensional tensor representation of multimodal features.",
"More specifically it does a multimodal Hadamard product of the aggregated features with RNN based language features.",
"Memory Fusion Network or MFN (Zadeh et al., 2018a) incorporate gated memory-units to store multiview representations.",
"It then performs an attention augmented readout over the memory units to combine information into a single representation.",
"MAGBERT (Rahman et al., 2020) is a transformer based architecture that uses the Wang gate (Wang et al., 2019).",
"The multimodal information is send to the multimodal gate to compute modified embeddings which are passed to a BERT (Devlin et al., 2018) based model.",
"This model achieves state-of the-art results on multimodal sentiment benchmark MOSI (Wllmer et al., 2013) and MOSEI (Zadeh et al., 2018c).",
"MIM (Han et al., 2021) is a recent near SOTA architecture.",
"It combined BERT based text embeddings with modality specific visual and acoustic LSTMs (Hazarika et al., 2020).",
"Recently Colombo et al. (2021) conducted experiments introducing a information regularizer on existing architectures.",
"method are",
"a) our method focuses on synergy terms whereas their proposal is optimizing joint mutual information between different unimodal representations; and",
"b) they experiment with variational measures of information.",
"We replicate our experiments with their best performing model and present the results with the label I Was .",
"We report both the Mean Absolute Error (MAE) and the correlation of model predictions with true labels.",
"In the literature, the regression task is also turned into a binary classification task for polarity prediction.",
"We follow Rahman et al. (2020) Accuracy Acc 7 denotes accuracy on 7 classes and Acc 2 the binary accuracy) of our best performing models.",
"We also report the Mean Absolute Error (MAE) and the correlation of model intensity predictions with true values.",
"We present and discuss here the results obtained in our experiments.",
"Results on MOSI are presented in Table 2 while Table 3 present results for MOSEI dataset.",
"We trained each of the models with the standard cross entropy loss (labeled as NLL); and with cross entropy loss regularized with the synergy penalty discussed earlier.",
"On both datasets, regularization via synergy leads to performance improvement.",
"For example, a MFN on CMU-MOSI trained with MMD based synergy (NLL+S MMD ) outperforms by more than 4 points on Acc 7 than standard likelihood training.",
"On CMU-MOSEI too the gains are significant when trained with synergy regularization.",
"In general training via MMD synergy tends to be better than via KL synergy.",
"This might be the inherent behavior of the MMD dependency which is always well defined; or it might reflect the hardness of information estimation.",
"For example it is well known that good bounds on standard mutual information are difficult to obtain (Kin-Acc 7 Acc 2 MAE CORR FLSTM NLL 31.2 75.9 1.01 0.64 NLL+S KL 31.6 76.3 1.01 0.66 NLL+S MMD 33.6 76.4 0.98 0.66 MFN NLL 31.3 76.6 1.01 0.62 NLL+S KL 32.5 76.6 0.94 0.65 NLL+S MMD 35.9 77.4 0.95 0.66 NLL+I Was 35.1 77.1 0.97 0.63 LFN NLL 31.9 76.9 1.01 0.64 NLL+S KL 32.6 77.6 0.97 0.64 NLL+S MMD 35.4 77.9 0.97 0.67 NLL+I Was 32.4 77.6 0.97 0.64 MAGBERT NLL 40.2 83.7 0.79 0.80 NLL+S KL 41.9 84.1 0.76 0.82 NLL+S MMD 41.9 85.6 0.76 0.82 NLL+I Was 41.8 84.2 0.76 0.82 MIM NLL 46.3 83.7 0.77 0.76 NLL+S KL 46.4 83.7 0.74 0.75 NLL+S MMD 46.7 84.2 0.72 0.79 NLL+I Was 46.6 84.2 0.75 0.79 Table 2: Results on sentiment analysis on CMU-MOSI. Acc 7 denotes accuracy on 7 classes and Acc 2 the binary accuracy. MAE denotes the Mean Absolute Error and Corr is the Pearson correlation ney and Atwal, 2014); while MMD estimator are asymptotically consistent (Gretton et al., 2012) 5.5 Modality Dropout Zadeh et al. (2018a); Rahman et al. (2020) have demonstrated that while multimodal fusion does improve performance, the primary modality continues to be textual data.",
"Hence in this experiment, we want to assess the effect of corruptions of text modality in our model.",
"Following Colombo et al. (2021) we experiment with dropping the text modality either by itself (T) or with one of the other modalities (T+V or T+A).",
"The results are presented in Table 4 Since the C-Networks forces a reconstruction and distributional divergence loss between the unimodal and multimodal representations, one would expect that models trained using our approach would be more resistant to modality errors.",
"This is borne out in the experiments, where we see that 1173 Acc 7 Acc 2 MAE CORR FLSTM NLL 44.1 75.1 0.72 0.52 NLL+S KL 44.4 75.6 0.70 0.52 NLL+S MMD 45.3 76.0 0.68 0.54 MFN NLL 44.3 74.7 0.72 0.52 NLL+S KL 44.3 74.8 0.72 0.56 NLL+S MMD 46.2 75.1 0.69 0.56 NLL+I Was 45.1 75.2 0.72 0.54 LFN NLL 45.2 74.3 0.70 0.54 NLL+S KL 46.1 75.3 0.69 0.56 NLL+S MMD 46.3 75.3 0.67 0.56 NLL+I Was 45.9 75.1 0.69 0.55 MAGBERT NLL 46.9 83.9 0.59 0.77 NLL+S KL 47.4 85.3 0.59 0.79 NLL+S MMD 47.9 85.4 0.59 0.79 NLL+I Was 47.2 85.0 0.59 0.78 MIM NLL 53.3 79.6 0.54 0.75 NLL+S KL 53.5 80.3 0.54 0.77 NLL+S MMD 54.3 82.4 0.52 0.77 NLL+I Was 53.5 82.1 0.53 0.77 Table 3: Results on sentiment analysis on CMU-MOSEI.",
"Note that the C-network itself is not active at test time; instead this effect is due to the alignment forced by the network during training.",
"An interesting future direction would be to explicitly use the C-network outputs to ameliorate modality corruption.",
"Our overall proposal has multiple components viz",
"a) the reconstruction loss (also called LL 2 loss);",
"b) the distribution alignment loss (which we call L d Loss); and",
"c) the kurtosis loss L .",
"As such we ran experiments to assess the importance of each component.",
"Specifically we trained the model without each of the three loss components prescribed in our method, and assessed the test performance.",
"The results are presented in Appendix A. First we note the performance improvement by incorporating kurtosis in the regularization which shows the efficacy of this term.",
"Second one can also note that removing any individual component leads to reduction in performance, suggesting all components act together in a synergistic way to improve the results.",
"In this paper, we used the idea of regularizing via a term which we label neural synergy maximization.",
"This regularizer is inspired by neural cicruit design in the vertebral cortex.",
"We experimented with different measures of synergy based on discrepancy measures such as KL and MMD.",
"We also show that training with synergy can produce benefit on even SOTA architectures.",
"Limitations The most prominent limitation of this approach, is that it is inherently limited by the architecture with which it is being used.",
"While our additional loss did improve performance, one can observe that the final performance is dependent on the initial performance.",
"For example, while we tested on four architectures, the final performance of each model was in the same range as the initial performance.",
"An entirely different architecture can possibly improve over our results.",
"On the other hand our approach is model agnostic and applicable on any model trained only via max-likelihood."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"result",
"method"
] |
[
"Real-world natural language processing (NLP) models need to be continually updated to fix the prediction errors in out-of-distribution (OOD) data streams while overcoming catastrophic forgetting.",
"However, existing continual learning (CL) problem setups cannot cover such a realistic and complex scenario.",
"In response to this, we propose a new CL problem formulation dubbed continual model refinement (CMR).",
"Compared to prior CL settings, CMR is more practical and introduces unique challenges (boundary-agnostic and non-stationary distribution shift, diverse mixtures of multiple OOD data clusters, error-centric streams, etc.).",
"We extend several existing CL approaches to the CMR setting and evaluate them extensively.",
"For benchmarking and analysis, we propose a general sampling algorithm to obtain dynamic OOD data streams with controllable non-stationarity, as well as a suite of metrics measuring various aspects of online performance.",
"Our experiments and detailed analysis reveal the promise and challenges of the CMR problem, supporting that studying CMR in dynamic OOD streams can benefit the longevity of deployed NLP models in production.",
"1 1 Introduction Fine-tuning large pre-trained language models (LMs) has become the de facto standard for training models of a variety of tasks in natural language processing (NLP).",
"These success stories are usually in places where the training and testing data are drawn from the same distribution.",
"However, in real-world scenarios, a deployed model (e.g., a question answering service) often encounters examples that are out of the training distribution (i.e., out-of-distribution, OOD ).",
"Such distribution shift often leads to a high error rate.",
"In practice, it is highly The work was done when Bill was an intern at FAIR.",
"preferred to continually refine deployed models whenever new errors are reported and annotated, in order to reduce their further negative impacts.",
"In spite of its importance, the challenge of continually refining a model over OOD data streams has been underexplored.",
"Prior work in continual learning (CL) has primarily focused on task-incremental settings with boundary-aware data streams.",
"These CL methods are usually evaluated on simple models and data (e.g., image classification with MNIST) (Aljundi et al., 2019).",
"It is not clear to what extent they can efficiently refine a model in boundary-agnostic streams for a complex language task (e.g., reading comprehension) with modern LMs.",
"In addition, there is no existing evaluation protocol for comprehensively comparing the collection of applicable methods for such a practical and complex problem.",
"Traditional CL paradigms mainly focus on incrementally learning a model from a data stream with a sequence of distinct tasks with explicit delineation, which is rather unrealistic in real-world NLP applications.",
"To address these research questions, we propose a novel CL formulation named continual model refinement ( CMR ), which aims to efficiently update a model for error correction in an out-of-distribution data stream without catastrophically forgetting its acquired knowledge over time.",
"In contrast to prior CL setups, CMR targets learning a model of a particular task (e.g., question answering) from its prediction errors in dynamic OOD data streams.",
"Instead of assuming that the streams are drawn from a fixed unseen distribution, we study CMR under a more general and realistic scenario, where the underlying distribution of OOD data streams is non-stationary across time steps without clear boundaries while being diverse at every time step.",
"In this paper, we focus on studying whether existing methods can address CMR and how we should benchmark and analyze their performance.",
"We first formulate the CMR problem with several ba-3128 sic metrics covering multiple desiderata for a CMR method: the ability to instantly fix known errors, the retention of previously acquire knowledge from upstream/online data, and the generalization to unseen OOD data (Sec. 2).",
"Then, we propose a general method to create the dynamic data streams of the aforementioned characteristics and evaluation metrics to benchmark CMR methods, yielding a comprehensive evaluation protocol for CMR (Sec. 3).",
"We employ and extend several suitable methods from the CL literature to study the CMR problem, which is based on parameter regularization or memory replay (Sec. 4).",
"We have conducted a comprehensive analysis with extensive experimental results, which reveal many interesting, non-trivial findings (Section 5).",
"For example, we find that even though replay methods are generally better than regularization-based methods, EWC (Kirkpatrick et al., 2017), a typical regularization method, achieves the best score in generalizing to unseen OOD data.",
"We also find that a simple variant of ranking criteria in conditional replay methods achieves more stable results.",
"Moreover, we find that different CMR methods have orthogonal improvements and our positive initial results suggest that integrating regularization terms for replay methods is a promising future direction to develop advanced CL methods to address CMR.",
"In this section, we formally introduce the proposed continual learning setup, continual model refinement (CMR).",
"We first define the notations and describe the learning objectives that are also illustrated in Fig. 1, then we design a few basic evaluation metrics for assessing CMR methods, and finally, we briefly discuss the unique challenges compared to other CL formulations.",
"Upstream learning.",
"Suppose that we want to build a question answering (QA) model.",
"To do this, we usually need to offline fine-tune a large pre-trained LM with the existing QA data we have now.",
"Formally, we denote a dataset with D = { ( x i , y i ) } , consisting of the examples are drawn from an upstream distribution U , i.e., D U .",
"The fine-tuned LM is named upstream model f 0 .",
"Query streams.",
"After the model f 0 is deployed in production, it is common to see ever-changing distribution shifts in real-world data.",
"We use !",
"{ Q 1 , . . . , QT } to denote the arriving examples grouped in T episodes and call this sequence of datasets as a query stream .",
"We discuss our method of creating such challenging query streams for evaluating CMR in Sec. 3.2 and Alg.",
"1.",
"Error streams.",
"In real-world scenarios, the size of Q t can be very large even in a short period of time, and it is unrealistic to assume that we can annotate all of them to refine the model f t 1 .",
"A common practice is to only annotate the ones that are reported as prediction errors or bugs.",
"Motivated by this, we use E t to denote the examples in Q t that are predicted incorrectly by f t 1 .",
"This thus forms an evolving, dynamic stream of prediction errors { E 1 , . . . , ET } , where E t = { ( x, y ) Q t | f t 1 ( x ) = y } .",
"Learning objectives.",
"To improve the user satisfaction over time, we need a continual model refinement (CMR) method g that can efficiently take the model f t 1 and E t as input and then output a refined model f t for processing future examples.",
"We expect f t to output correct answers for the known errors E t immediately while maintaining its correct predictions on previous questions that are answered correctly.",
"We also want the refined models to keep their generalization ability to unseen future data in the stream.",
"Sec. 2.2 shows the metrics to assess a CMR method g toward these goals.",
"We use five metrics to describe the desiderata for CMR methods and assess them quantitatively.",
"We show how to use these metrics for benchmarking in a comprehensive yet concise way in Sec. 3.3.",
"Error-fixing rates (EFR).",
"To assess the responsiveness of the error-fixing methods, we look at how many errors can be fixed right away.",
"We define the instant error-fixing rate at time step t as: EFR ( t ) =: Acc ( f t , E t ) =: |{ ( x, y ) E t | f t ( x ) = y }| | E t | .",
"Knowledge retention (UKR&OKR).",
"We define two metrics below to assess how much knowledge acquired from upstream or online data streams that the model maintains over time: UKR ( t ) =: Acc( f t , D ) and OKR ( t ) =: Acc( f t , Q <t ) , where Q <t = (cid:83) t 1 i =1 Q i .",
"We down-sample D and Q <t and compute periodically for efficiency.",
"Cumulative success rates (CSR).",
"To monitor the model performance on incoming query examples, we compute a running average of success rates at past time steps: CSR ( t ) =: 1 | E <t | / | Q <t | .",
"Knowledge generalization (KG).",
"As we only have a finite number of episodes for experiments, to assess the model performance in the future episodes, we test the models with a held-out set of test examples, H , that are drawn from the same underlying distributions which are used to create the query stream.",
"That is, KG ( t ) =: Acc( f t , H ) .",
"Without loss of generality, we suppose that Q t O t , where {O t } denotes an ever-changing series of unseen distributions.",
"Typical task-incremental CL problem setups such as LAMOL (Sun et al., 2020) and CLIF (Jin et al., 2021) consider Q t and Q t +1 are sampled from two distinct tasks.",
"Therefore, the distribution shifts are sudden (i.e., O t and O t +1 does not share any overlapping components).",
"Also, in conventional CL formulations, the past distribution will never be revisited, which is rather unrealistic in real-world applications.",
"They do not have the concept of error stream either.",
"Instead, the proposed CMR formulation is essentially a boundary-agnostic CL problem in non-stationary data streams, where the distribution shifts are more dynamic, unpredictable, and diverse, yielding a more realistic yet challenging CL setup.",
"We provide a comprehensive evaluation protocol for studying continual model refinement in OOD",
"streams.",
"This section first briefly describes our selected task and datasets (Sec. 3.1), then focuses on our proposed method to sample non-stationary OOD data streams (Sec. 3.2), and finally, illustrate how we use the basic metrics to benchmark various CMR methods in a comprehensive yet concise way.",
"In this paper, we mainly use extractive question answering (i.e., machine reading comprehension) to evaluate and analyze CMR methods, while one could also study the CMR problem in any NLP tasks with the proposed protocol.",
"We use the MRQA-19 benchmark (Fisch et al., 2019) which consists of 6 datasets sharing the same formats.",
"We use the SQuAD (Rajpurkar et al., 2016) as the upstream data for offline training the base LM, and use the other five parts as the OOD data for continual learning: NQ (Kwiatkowski et al., 2019), HotpotQA (Yang et al., 2018), SearchQA (Dunn et al., 2017) and TriviaQA (Trischler et al., 2017).",
"This is because SQuAD is more commonly used for deploying models in production and the real-world QA examples from online users can be more similar to the distribution of NQ and SearchQA.",
"Here we discuss how to create a realistic ever-changing series of distributions (i.e., {O t } in Sec. 2.3) for creating query streams { Q t } .",
"Background.",
"A common practice in CL to create a controllable non-stationary data stream is to control the context-switching probability.",
"For example, OSAKA (Caccia et al., 2020), as a representative method, uses a Markov chain to sample a sequence of tasks with a constant transition probability and then sample the examples from the selected task at each time step.",
"Despite its simplicity, this method is nevertheless limited to the cases where query stream Q t can only be drawn from a single distribution, which can be unrealistic.",
"Instead, it is common that the online data at a time step are from multiple underlying OOD data clusters, each of which has a different feature distribution, thus yielding a more diverse and challenging environment for continual model refinement.",
"Also, it is often that in the early stage of the model deployment, the query streams still contain examples of the upstream distribution U , and the ratio of such in-distribution examples will decay over time.",
"Our proposed method.",
"Motivated by these practical considerations, we propose a novel sampling algorithm to control the dynamics of query streams, aiming to encourage diversity and model the decaying upstream distribution.",
"We consider that there are N underlying data clusters, { V 1 , . . . , VN } , each of which corresponds to an unseen distribution, and we have V 0 U which is a data set sampled from the upstream distribution.",
"Our key motivation is to sample the target Q t from three sources: the in-distribution data cluster V 0 , the data of a major OOD cluster V c t , and the mix of other remaining OOD data clusters V = c t .",
"As shown in Alg.",
"1 , we have three key configuration arguments ( , , ) for controlling the dynamics of the query stream:",
"1) is the decaying factor for the ratio of in-distribution data,",
"2) is the transition probability of the Markov chain for deciding the index of the major OOD cluster c t , and",
"3) is to control the diversity by adding data from remaining OOD clusters; T is the number of episodes and b is size of Q t .",
"Fig. 2 shows examples of query streams and associated error streams.",
"Overall measurement.",
"Recall that there are five basic metrics in Section 2.2, namely EFR (instnat error-fixing rate), UKR (upstream knowledge re-tention), OKR (online knowledge retention), CSR (cumulative success rate) and KG (knowledge gen-eralization).",
"To have a comprehensive yet concise analysis of CMR methods, we report the average and final values of these metrics.",
"Specifically, we use X to denote the average scores in the metric X (e.g., X=UKR) over all time steps, and X (T) to Algorithm 1: Sampling query streams with controllable non-stationarity from multiple data clusters.",
"denote the score at the final time step.",
"Reporting both can help us quickly assess the trend of performance of f t in addition to its final performance.",
"Besides these fine-grained scores, we also provide an overall evaluation criterion (OEC) by taking the average of the four scores except for the EFRs 2 , i.e., OEC = average( UKR, OKR, CSR, KG ) .",
"Validation/testing streams.",
"To evaluate CMR methods (introduced later in Sec. 4), we use the method in Alg.",
"1 to sample multiple streams under the same configurations (i.e., T, b, , , and { V i } ) and then split them as validation streams and testing streams.",
"The validation streams are used to pick the best hyper-parameters of each CMR method (e.g., the of regularization-based methods 2 Note that we report EFR scores separately because it computes on the method-specific errors unlike other metrics that test on same examples for all CMR methods. 3131 and the size of R t in replay methods) and then they are evaluated on the same set of testing streams.",
"We first introduce our base LM and then illustrate several typical continual learning methods with our extensions to make them applicable to the CMR problem.",
"We discuss other relevant yet not suitable methods in Related Work (Sec. 6).",
"Base model.",
"Pretrained text-to-text language models, such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020), are commonly used for studying a wide range of NLP tasks.",
"This is because they are generally applicable to tasks that can be formulated as a text-to-text problem, and that they show better generalization potential (Ye et al., 2021; Wei et al., 2021; Sanh et al., 2021).",
"We thus employ the text-to-text formats to pre-process all data in our experiments and use BART-base as the base model.",
"We find the BART-base model is a great fit to support our extensive experiments for its relatively smaller size and comparable upstream performance versus its alternatives.",
"Thus, we use it for our experiments to ensure the scalabil-ity of our analysis and the generality of our findings.",
"Note that we do not aim to offline train a perfect upstream model f 0 with the upstream dataset D .",
"Instead, we focus on the CMR methods that can continually refine a given upstream model.",
"Continual fine-tuning.",
"The most straightforward method is to always use a vanilla optimizer (e.g., Adam (Kingma and Ba, 2015)) to fine-tune f t 1 with a small learning rate on E t for a few epochs, aiming to minimize the loss L Error ( t ) of fine-tuned model f t on E t .",
"Such refined models f t should be able to output correct outputs for these known errors.",
"This method may overfit these errors and thus forget previously acquired knowledge.",
"We introduce a few regularization methods next.",
"A common solution to preventing forgetting is to add a temporal regularization term to the loss for continual fine-tuning: L total ( t ) = L Error ( t ) + L Reg ( t ) , so that the parameter changes from f t 1 to f t are restricted to avoid over-fitting.",
"where t is the parameters of f t .",
"This regularization term mitigates the forgetting issue by applying a penalty for every parameter change.",
"Online EWC.",
"Elastic weight consolidation (Kirkpatrick et al., 2017) is a typical regularization method for CL.",
"Unlike L2Reg which gives an equal penalty to every parameter change, EWC produces a weighted penalty such that the parameters that are more important to the previous tasks will have larger penalty weights, leading the parameter changes to find an overlapping space where both previous knowledge and new knowledge can be stored in the parameters.",
"In particular, it efficiently estimates the Fisher Information Matrices F ( t ) ii and use them for consolidating the weighted penalty: LEWC ( t ) = t 1 (cid:88) j =1 (cid:32) 1 2 (cid:88) i F ( j ) ii (cid:0) it it 1 (cid:1) 2 (cid:33) .",
"We here employ an extension of EWC by keeping a running sum of F ii to avoid the growth of computation cost in the online setting.",
"Experience replay.",
"ER (Rolnick et al., 2019) is a simple yet effective replay method that stores the previous examples into a growing memory module M .",
"Then, we periodically (every k time steps) sample a small subset of the memory R t as additional training examples for model refinement.",
"It uses a two-stage process: fine-tune f t 1 on R t to get f t 1 and then fine-tune f t 1 on E t to get f t .",
"Maximally interfered replay (MIR).",
"Instead of randomly selecting R t from M , MIR (Aljundi et al., 2019) aims to replay the most forgettable examples, conditioning on the current information: f t 1 and E t .",
"It samples a small candidate pool C M and then ranks the examples in C by their interference scores .",
"Finally, the R t of MIR is the subset of C with the largest scores.",
"To compute interference scores, we first fine-tune f t 1 on E t to get a virtual model f t .",
"Then, we compute the 3132 loss of f t 1 and f t on each example in C to get the interference scores (i.e., the loss delta): score( x i , y i ) =: loss( f t ( x i ) , y i ) loss( f t 1 ( x i ) , y i ) .",
"MaxLoss replay.",
"Inspired by Jiang et al. (2019) and Kawaguchi and Lu (2020) that show learning with the examples with largest losses can enhance the learning efficiency, we propose a variant of the MIR by redefining the scoring function to score ( x i , y i ) =: loss( f t ( x i ) , y i ) and call it MaxLoss, which takes the examples that have largest losses on the virtual model f t (instead of the largest delta in MIR).",
"Extension for CMR.",
"(1) Bi-Memory: There are two types of knowledge that we want to maintain in CMR: the knowledge acquired in upstream and online learning respectively.",
"Considering that the upstream data is much larger than the incoming errors, it is thus not reasonable to use a single memory module as in other CL problems.",
"We thus use two separate memory modules M u and M o where the upstream memory is M u = D and the online memory M o grows by adding E t .",
"(2) Mixed-Tuning : Instead of following the two-stage method of using R t , we choose to mix R t and E t for fine-tuning f t 1 .",
"Both modifications are supported by their better empirical results.",
"We first present the setup in Sec. 5.1, and report our main results in Table 1 and Figure 3, which we use to discuss our key findings in Sec. 5.2 to 5.5.",
"Please note that there are other additional results in Appendix, and we will release our codebase and full experimental logs to support reproducibility.",
"Reference range.",
"To get a reference range of the performance, we set up two reference methods.",
"1) FrozenUpstream : We always use the upstream model (i.e., f t f 0 ) for inference at every time step.",
"2) OfflineRefining : We combine all the errors of f 0 as E T and then offline fine-tune the model f 0 with D + E T , where D is a subset of D , to directly get the final refined model f T .",
"Hyper-parameters.",
"We here use a normal configuration of the streams (i.e., T =100, b =64, =0.9, =0.5, =0.8) for studying the CMR methods and discuss other extreme configurations briefly in Sec. 5.5 and more in Appendix.",
"To select the optimal hyper-parameters of each method (e.g., the learning rate, training epochs, method-specific arguments, etc.), we use grid search and pick the ones with the best overall score on validation streams.",
"We report the results in Table 1 & Figure 3, and organize our findings by answering a coherent list of analysis questions: (Q1-Q7) .",
"(Q1)",
"Can we fix errors without forgetting?",
"From the EFR column, we can see that all methods can achieve a 95+% instant error-fixing rate, meaning that they can indeed quickly fix most of the known errors.",
"However, they tend to forget the previously fixed errors and even examples that are correctly predicted before in the query stream.",
"An oracle method that does not forget the previously acquired knowledge would have an OKR (T) of nearly 100%, while the OKR (T) of the continual fine-tuning method is only 77 .",
"7% .",
"The issue of forgetting both online and upstream knowledge in the continual fine-tuning baseline is quite serious.",
"Notably, its OKR (T) is much lower than its OKR (83.87 77.73), and similarly for UKR (T) and UKR (72.05 66.21).",
"The curves in Figure 3 also suggest that the forgetting issue can be increasingly more serious over time, and it does not show any trend to diminish after T .",
"This confirms that studying the CMR problem is of great importance for enhancing deployed NLP models.",
"(Q2)",
"How well do CMR methods mitigate the forgetting issue?",
"All tested CMR methods can indeed mitigate forgetting without lowering down the EFRs, but they behave quite differently.",
"The regularization methods (i.e., Online L2Reg and Online EWC) are better at improving OKRs rather than UKRs, while replay methods enhance both OKRs and UKRs quite well.",
"For example, MaxLoss can achieve the best OKR (T) ( 91 . 0% ) while having a UKR (T) that is even slightly better than the FrozenUpstream model (80.6 vs 80.3).",
"Moreover, we find that MaxLoss and MIR have great potential to continually improve knowledge retention in the future.",
"From both curves in Fig. 3 and Table 1 (i.e., the comparisons between UKR/OKR and UKR (T) /OKR (T) ), we can see they tend to have better scores in the later stages, but the retention scores of regularization-based methods are decreasing over time.",
"We have a detailed discussion on replay-based methods in Q4 .",
"(Q3)",
"Can refined models generalize to unseen OOD data?",
"Recall that CSRs evaluate the incoming yet not touched examples over time in the stream and the KGs evaluate the held-out examples that are not in the stream.",
"Both metrics thus test on OOD examples that are unseen to the refined model at that time.",
"Compared to the FrozenUpstream baseline, we see all methods have large performance gains (from 30% to 50+% in CSR (T) and KG (T) ).",
"The MIR w/ Online L2Reg even achieves the best CSR (T) and it is significantly better than others, showing that learning with replay effectively improves the generalization ability.",
"From the KG and KG (T) columns of these CMR methods (and Fig. 3), we can see that refined models are increasingly more generalizable to held-out unseen data over time as well.",
"However, the differences among these methods in these two metrics are not obvious, although they are all better than the continual fine-tuning baseline.",
"Interestingly, the regularization method OnlineEWC gets the best score of KG (T) , even though its CSR (T) is worse than others.",
"This suggests that learning with replay might hurt the held-out knowledge generalization, but regularization could maintain a better generalization ability in the long run.",
"(Q4) How should we replay the memory?",
"We find that increasing the replay frequency (i.e., setting a smaller replay interval k ) can largely improve the overall performance for ER, MaxLoss, and MIR.",
"This is expected as there are more fine-tuning steps over the retrieved data.",
"However, the reason for such improvement varies among them.",
"Increasing the replay frequency primarily benefits ER's UKR (T) , but not for other metrics, and it even causes a lower OKR (T) .",
"Instead, MaxLoss and MIR also benefit from larger OKR (T) (MaxLoss: 84.77 89.26; MIR: 87.50 90.43).",
"This suggests that conditional replay methods can get more important stored memory to replay than ER's random selections.",
"Thus, it is promising to develop more advanced conditional replay methods for CMR.",
"(Q5)",
"Are larger buffer sizes always better for conditional replay methods?",
"Larger buffer sizes (i.e., c=256 512 1024) can increase MaxLoss's UKR (T) and OKR (T) with a large margin and thus produce better overall scores.",
"However, MIR with larger buffer sizes suffers from decreasing UKR (T) and OKR (T) .",
"This indicates that that delta of loss as the ranking criteria is less stable than using the virtual loss itself (i.e., MaxLoss).",
"This finding conflicts with the MIR experiments on MNIST-based task-aware streams (Aljundi et al., 2019).",
"We thus conjecture it is because our streams are more complex and the loss landscapes of the task are significantly different from the toy datasets used for evaluation in many prior CL works (e.g., image classification over shuffled MNIST).",
"(Q6) Do different CMR methods produce similar refined models?",
"We use Figure 4 to visualize the differences among the refined models produced by selected CMR methods in two different periods.",
"We can see the refined models by continual fine-tuning (CFT) and regularization methods are more similar to each other, and all replay methods are Stream Dynamics CFT EWC ER MxLs MIR =0.9, =0.5, =0.8 15.81 19.54 20.86 20.78 21.63 =0.9, = 0.1 , =0.8 23.40 24.14 26.32 26.05 26.04 =0.9, = 0.9 , =0.8 18.61 19.38 20.78 19.51 20.60 =0.9, =0.5, = 0.5 19.97 20.10 21.97 23.01 22.04 =0.9, =0.5, = 0.2 17.37 16.22 19.15 20.60 19.45 Table 2: The gain of OEC (T) over the Frozen Upstream baseline for each method under different stream dynamics.",
"quite distinct from other methods.",
"Also, the divergence among different methods rapidly increases from t = [10 , 20] to t = [30 , 40] .",
"Therefore, we believe that the improvement of these CMR methods is orthogonal to each other, especially between regularization and replay methods.",
"replay methods?",
"Inspired by Fig. 4 and findings in (Q3) , we add an initial experiment by combining the MIR and OnlineL2Reg and show its performance in Table 1.",
"Interestingly, we indeed observe this combination produces a noticeable improvement over both MIR and OnlineL2Reg , yielding the state-of-the-art performance in OEC (T) scores.",
"To the best of our knowledge, there is little prior work that has studied the effect of integrating regularization in (conditional) replay methods, and our initial results suggest that this is a very promising direction for future research .",
"Our above analysis is based on the results of a normal stream configuration (i.e., =0.9, =0.5, =0.8), but can such tuned hyper-parameters of CMR methods directly apply to streams of extreme configurations?",
"In Table 2, we briefly compare the gain of the previous CMR methods in terms of their OCE (T) improvement over the vanilla FrozenUpstream baseline under a few extreme settings of We find that, in general, all replay methods are still better than continual fine-tuning and Online EWC.",
"ER shows more stable results in extreme settings (e.g., = 0.1 or 0.9) but MIR and MaxLoss (MxLs) are more sensitive to the non-stationarity yet less sensitive to the diversity.",
"Continual Learning for NLP.",
"Recently, continual learning (or lifelong learning) has drawn attention in the NLP field (Biesialska et al., 2020; Sun et al., 2020; Wang et al., 2019; Huang et al., 2021; Jin et al., 2021).",
"However, most of these works 3135 follow the traditional task-incremental , boundary-aware , never-revisiting CL setup, which is not directly beneficial to most of the real-world scenarios of deployed NLP models.",
"For example, the CLIF formulation (Jin et al., 2021) focuses on learning over a sequence of different NLP tasks with few-shot data so that the trained model can generalize better to unseen tasks.",
"In contrast, the proposed CMR in this work is a particularly novel CL setup where we focus on continually refining a model with its prediction errors in OOD data streams, thus yielding a boundary-agnostic, dynamically non-stationary environment for CL methods to work.",
"Such fundamental differences between CMR and traditional CL setups make it difficult to directly apply many CL methods that are based on boundary-aware streams, especially for those who require learning task representations.",
"CMR vs. OSAKA The OSAKA (Caccia et al., 2020) problem is similar to the CMR in that we both focus on CL in non-stationary boundary-agnostic data streams.",
"However, it does not consider the distribution diversity inside each time step or the decay of upstream distribution in the online setting.",
"Our sampling method (Alg.",
"1) fills the gap and yields a more realistic CL setup.",
"In addition, the data streams of CMR are always the prediction errors of the latest model, thus producing a naturally evolving and adversarial environment for CL methods to explore.",
"Moreover, the experiments of OSAKA are limited to simple networks and tasks such as MNIST, but our work uses pretrained Transformer LMs and the QA task, and thus we believe our analysis and findings are more useful for the NLP community and beyond.",
"Model Refinement.",
"Model refinement has recently become an emerging topic in NLP, but existing works have mainly been limited to offline editing time-sensitive factual knowledge in pretrained LMs (Zhu et al., 2020; De Cao et al., 2021; Mitchell et al., 2021).",
"In contrast, our work studies the model refinement in an online continual learning setting and for downstream NLP tasks such as reading comprehension and natural language inference.",
"Jang et al. (2021) attempt to study the knowledge editing problem at a larger scale, but its problem formulation only contains two time-steps, thus being significantly different from CMR.",
"Dhingra et al. (2021) propose a simple method to jointly model text with its timestamp so that the trained language models can be calibrated when new knowledge arrives, while CMR focuses on the error cases from OOD data streams where the timestamps have little correlation with the skills we want the deployed model to learn.",
"Besides, Yao et al. (2021) propose a method of learning from explanations to fix prediction errors, which shares similar high-level motivation but has few direct connections to our focus in this work.",
"In this paper, we propose a novel continual learning formulation named continual model refinement (CMR).",
"The CMR problem aims to efficiently fix prediction errors when learning in out-of-distribution data streams without catastrophically forgetting the acquired knowledge.",
"For studying such a realistic and complex problem, we presented a dedicated evaluation protocol with a general method to create non-stationary, diverse OOD data streams for analysis.",
"Also, we design multiple evaluation metrics to deliver a comprehensive yet concise measurement of CMR methods.",
"The proposed CMR problem with our comprehensive analysis opens up a range of new opportunities for studying continual learning problems that are closer to real-world applications for the NLP community and beyond.",
"For example, based on our results and analysis about (Q3) and (Q6) , we find that it is promising to study how we can integrate both regularization methods and replay methods for mitigating the forgetting issue while improving the generalization ability.",
"The analysis about (Q5) suggests that developing more stable ranking criteria is also important to conditional replay methods (e.g., our simple extension MaxLoss can outperform MIR under specific settings).",
"Developing CMR methods of which the configurations can generalize to diverse types of streams is also an important challenge.",
"We release our codebase and processed datasets for supporting the reproducibility of our experiments and future research."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"result",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective",
"result",
"objective",
"abstain",
"abstain"
] |
[
"Yifan Hou Department of Computer Science ETH Zurich [email protected]",
"Abstract NLP has a rich history of representing our prior understanding of language in the form of graphs.",
"Recent work on analyzing contextualized text representations has focused on hand-designed probe models to understand how and to what extent do these representations encode a particular linguistic phenomenon.",
"However, due to the inter-dependence of various phenomena and randomness of training probe models, detecting how these representations encode the rich information in these linguistic graphs remains a challenging problem.",
"In this paper, we propose a new information-theoretic probe, Bird's Eye , which is a fairly simple probe method for detecting if and how these representations encode the information in these linguistic graphs.",
"Instead of using classifier performance, our probe takes an information-theoretic view of probing and estimates the mutual information between the linguistic graph embedded in a continuous space and the contextualized word representations.",
"Furthermore, we also propose an approach to use our probe to investigate localized linguistic information in the linguistic graphs using perturbation analysis.",
"We call this probing setup Worm's Eye .",
"Using these probes, we analyze BERT models on their ability to encode a syntactic and a semantic graph structure, and find that these models encode to some degree both syntactic as well as semantic information; albeit syntactic information to a greater extent.",
"Our implementation is available in https://github.com/ yifan-h/Graph_Probe-Birds_Eye .",
"Graphs have served as a predominant representation for various linguistic phenomena in natural language (Marcus et al., 1993; De Marneffe et al., 2006; Hockenmaier and Steedman, 2007; Hajic et al., 2012; Abend and Rappoport, 2013; Ba-narescu et al., 2013; Bos, 2013).",
"These graph based representations have served our intuition for representing both language structure (Chomsky, 1957) as well as meaning (Koller et al., 2019).",
"With the growing popularity of pretrained language models that build contextualized text representations (Reid et al., 2020; Devlin et al., 2019, inter alia), various probing models have been introduced to understand if and how our linguistic intuitions are encoded in these representations.",
"These probes train supervised models to predict pieces of linguistic information such as POS (part-of-speech), morphology, syntactic and semantic relations, and other local or long-range phenomena in language (Belinkov et al., 2017; Conneau et al., 2018; Hewitt and Manning, 2019; Tenney et al., 2019b; Jawahar et al., 2019).",
"However, it is still an open question if these representations somehow encode entire linguistic graph structures such as dependency and constituency parse trees or graph structured meaning representations such as AMR (Abstract Meaning Representation), UCCA (Universal Conceptual Cognitive Annotation), etc.",
"A popular recent work, the structural probe (He-witt and Manning, 2019), has investigated how contextualized representations encode syntax trees.",
"They tested if a linear transformation of the net-work's word representation space can predict particular features of the syntax tree, namely, the distance between words and depth of words in the tree.",
"Thus, the structural probe cannot by itself answer the question if these representations encode entire linguistic graph structures.",
"Moreover, the structural probe is only designed for tree structures and cannot be extended to general graphs.",
"In this work, we introduce a new probing approach, Bird's Eye , which can be used to detect if contextualized text representations encode entire linguistic graphs.",
"Bird's Eye is a simple information-theoretic probe (Pimentel et al., 2020b) which first encodes the linguistic graph into a conFigure 1: Methodology of Bird's Eye : To probe pretrained language models, linguistic graphs are embedded in a continuous space and the mutual information between graph embeddings and word representations is calculated.",
"tinuous representation using graph embedding approaches (Cai et al., 2018) and then, estimates the mutual information between the linguistic graph representation space and the contextualized word representation space.",
"An illustration of the probe approach is given in Figure 1. The information theoretic approach is more reliable than training a probe and using accuracy for probing as it is debatable if the classifier-based probe is probing or trying to solve the task (Hewitt and Liang, 2019; Pimentel et al., 2020b).",
"We further extend Bird's Eye to probe for localized linguistic information in the linguistic graphs such as POS or dependency arc labels in dependency parses.",
"We call this probe, Worm's Eye .",
"In our experiments, we first illustrate the reliability of our probe methods and show the randomness of previous probe methods that use accuracy.",
"Then, we use Bird's Eye to detect syntactic and semantic structures in BERT, showing that much syntactic and some semantic structure are encoded in BERT.",
"Besides, we also use Worm's Eye to probe for specific linguistic information in syntactic trees and semantic graphs respectively to see which kinds of localized linguistic information is encoded in BERT.",
"Our probing results are consistent with previous probe methods (Hewitt and Manning, 2019; Reif et al., 2019; Liu et al., 2019; Tenney et al., 2019a,b; Wu et al., 2021).",
"We also discuss limitations of our probe and how future work can build upon our foundation.",
"In this section, we introduce our information-theoretic approach for probing linguistic graph structures in word representations.",
"The MI estimate is used to understand how much of the information in the linguistic graph structure has been learnt by the pretrained models.",
"Let X = { x 1 , . . . , x T } denote an input sentence (each x i is the contextual embedding of a token in the given vocabulary V ) and G denote the corresponding linguistic graph.",
"Furthermore, let X denote a random variable that takes values ranging over all possible token sequences in V .",
"Correspondingly, let G denote a random variable that ranges over all possible corresponding linguistic graphs.",
"We use I ( X ; G ) to denote the linguistic structure information that is included in the given word representations.",
"Note that the MI value I ( X ; G ) is always non-negative, and a large MI implies that more of the structure information is encoded in the word representations.",
"In order to make the MI computation easier, we additionally assume alignments between the nodes V in the graph G and the words in X .",
"This alignment is one to one, for example, in dependency parsing (Marcus et al., 1993) but an aligner might be needed in some cases (Banarescu et al., 2013).",
"There are three main challenges in estimating MI in our setting.",
"First, the MI estimation of discrete graphs and continuous features has been an elusive problem (Ross, 2014; Kraskov et al., 2004; Escolano et al., 2017), since there is no widely accepted definition of mutual information in this setting.",
"Second, the dimensionality of the contextualized word representations is very high.",
"Traditional methods (Moon et al., 1995; Steuer et al., 2002; Paninski, 2003) for MI estimation do not scale well with large sample size or dimension (Gao et al., 2015).",
"Getting accurate estimates of mutual information in the high dimension is not easy.",
"Third, graphs across different linguistic formalisms could have different entropy values, and thus the MI value I ( X ; G ) may be uncomparable across the different linguistic graph formalisms.",
"For example, if syntactic trees G and semantic graphs G (cid:48) have the same MI value with X i.e. I ( X ; G ) = I ( X ; G (cid:48) ) while the entropy values are fairly different i.e. I ( X ; G ) H ( G ) << H ( G (cid:48) ) , it is not proper to conclude that X contains the same amount of information from structures G and G (cid:48) , since they correspond to different percentages of the amount of uncertainty.",
"Thus, the MI values must be interpreted carefully.",
"Bird's Eye tackles the aforementioned diffi-culties by transforming the linguistic graphs into a continuous space using a graph embedding approach.",
"Then the MI between graph embeddings and word representations is estimated using a recently proposed method (Belghazi et al., 2018) which performs well even in high dimensions.",
"Finally, we also estimate a lower and upper bound of the MI, which is used to interpret the MI value.",
"We describe various stages of Bird's Eye below: 2.1 Graph Embedding The provided linguistic graphs can typically be represented as an adjacency matrix.",
"Directly calculating MI with the adjacency matrix is not useful due to the sparsity and discreteness of the adjacency matrix representation.",
"Thus, we transform the graphs into a continuous space where each node is represented by a continuous representation of same dimensionality.",
"Theoretically, if the graph embedding approach is perfect, we can use the invariant property of mutual information (Kraskov et al., 2004).",
"This property states that under some fairly strong conditions, there exists an invertible function f ( ) that satisfies G = f 1 ( f ( G )) , where the graph embeddings are Z = f ( G ) .",
"Thus, we can transform G into graph embeddings Z , and: I ( X ; Z ) I ( X ; G ) (1) In this paper, we use DeepWalk (Perozzi et al., 2014), which is based on the skip-gram model (Huang et al., 1993; Mikolov et al., 2013) for graph embeddings 1 .",
"Specifically, given a node v V encoded as the one-hot vector 1 v , the model tries to predict its neighbor's vector 1 v (cid:48) where v (cid:48) N v .",
"The graph G = { V, E } is first sampled to generate a set of random walks.",
"Then the graph neighborhood relationship is represented by the co-occurrence of nodes in the walk paths.",
"Finally, for all the walks, Word2vec (Mikolov et al., 2013) with skip-gram is used to maximize the co-occurrence likelihood 2 : L ( ) = (cid:89) v V (cid:89) v (cid:48) N v P ( 1 v (cid:48) | 1 v ; ) .",
"Note that the Bird's Eye probe is a general probe.",
"Other graph embedding approaches can also be selected for the transformation under specific conditions.",
"2 Note that in Word2vec, the window size is a hyperparameter that need to be selected by users.",
"Here, for simplicity, we set the window size as 1 .",
"Let Z = { z v | v V } denote the learnt graph embedding where z v is the embedding of node v .",
"Here denotes the concatenation operation.",
"In our experiments, we also explore to what extent the original linguistic graphs can be restored by the graph embeddings, which tests the extent to which eq.",
"1 holds and if we can use I ( X ; Z ) instead of I ( X ; G ) to estimate MI.",
"More details can be found in Appendix A 2.2 Mutual Information Estimation To estimate I ( X ; Z ) in high dimensions, we maximize the compression lemma lower bound (Baner-jee, 2006) as mentioned in Belghazi et al. (2018).",
"Specifically, for a pair of random variables X and Z , the mutual information is equivalent to the Kullback-Leibler (KL) divergence between the joint distribution PXZ and the product of the marginal distributions PX PZ : I ( X ; Z ) = DKL ( PXZ || PX PZ ) .",
"From the compression lemma lower bound (Baner-jee, 2006), the KL divergence DKL ( P || Q ) can be bounded as:",
"where F can be any class of functions T : R satisfying certain integrability constraints.",
"Thus, in the inequality 4, the lower bound can be obtained by finding a function in the set F : I ( X ; Z ) sup T F EPXZ [ T ] log( EPX PZ [ e T ]) .",
"To get a tight estimate of I ( X ; Z ) , we need the lower bound to be as high as possible.",
"Thus, the MI estimation problem turns into an optimization problem to maximize the compression lemma lower bound.",
"To ensure that, similar to Belghazi et al. (2018), we let F = { T } be the set of functions parametrized by a neural network, and optimize the neural network using stochastic gradient descent.",
"Formally, the objective function is: max ( EP ( n ) XZ [ T ] log( EP ( n ) X P ( n ) Z [ e T ])) .",
"Here, P ( n ) XZ , P ( n ) X and P ( n ) Z are empirical joint and marginal distributions over a sample of n (sentence, graph) pairs.",
"We calculate graph embeddings for each sentence independently, and regard one sentence as a mini-batch to optimize the neural network iteratively for MI estimation.",
"Note that different from existing probe models, our objective of the neural network is to find an optimal function in F and estimate MI, rather than use prediction accuracy.",
"Besides, the neural network is very simple (MLP).",
"Therefore, there is no need to split dataset into training and test to test generalization in MI estimation 3 .",
"The negative of the training loss as eq.",
"5 can be taken as MI estimation directly (Belghazi et al., 2018; Cristiani et al., 2020).",
"In our experiments, we verify the effectiveness of the MI estimation method to prove that the probe is stable.",
"More technical details of the MI estimation model and how it is trained are given in Appendix B. 2.3 Control Bounds Next, we introduce two control bounds to interpret the MI value, whose functions are similar to the control task introduced by Hewitt and Liang (2019).",
"As mentioned, comparing MI alone across different types of structures is not useful, since the entropy values of graph embeddings can also be different.",
"Thus, we calculate an upper and a lower bound of the MI value based on the graph structures.",
"Instead of using the MI value alone, we interpret it by its relative value in terms of the two control bounds.",
"Formally, for the MI between graph embeddings and word representations, we have: I ( R ; Z ) I ( X ; Z ) I ( Z ; Z ) .",
"The lower bound is the MI between a truly random variable R (i.e., independent of the graph) and the graph embedding Z .",
"Thus, I ( R ; Z ) = 0 .",
"The upper bound telescopes to the graph structure's self-entropy 4 .",
"Using these two control bounds, we interpret the structure information by the relative MI value 5 : MIG ( G ) = I ( X ; Z ) I ( R ; Z ) I ( Z ; Z ) I ( R ; Z ) , (7) The MI estimates I ( Z ; Z ) and I ( R ; Z ) can be obtained in the same way as I ( X ; Z ) (using the MI estimation method mentioned above).",
"MIG (eq. 7) 3 Alternatively, the dataset can be divided evenly into training and test for MI estimation.",
"4 Note that for continuous random variables Z , the number of values that Z can take is infinite.",
"In this condition, I ( Z ; Z ) tends to infinity.",
"Thus, we use a small noise (cid:15) and approximate it as I ( Z + (cid:15), Z ) .",
"5 The definition is similar to the uncertainty coefficient.",
"scales the MI value for graph embeddings with different self-entropy values into the same range: MIG ( G ) [0 , 1] .",
"Intuitively, MIG captures what percentage of the structure information is encoded in word representations.",
"Since MIG ( G ) is scaled using I ( R ; Z ) , it also helps reduce the error in MI estimation.",
"As mentioned, we maximize compression lemma lower bound 5 as the MI estimate.",
"However, there could be a gap between it and the ground-truth MI value.",
"Based on the fact that the ground-truth I ( R ; Z ) = 0 , we can know that the gap I ( R ; Z ) I ( R ; Z ) is equal to I ( R ; Z ) .",
"In MIG (eq. 7), the gap is added for both numerator and denominator, which reduces the error brought by MI estimation 6 .",
"Bird's Eye allows us to probe for entire lin-guisitic structures.",
"However, for us to have a complete understanding, we might also want to probe for some localized information in the linguistic graphs.",
"For example, we may want to know if BERT knows about POS tags or certain dependency relations in the syntax parse.",
"We formulate probing for localized linguistic information as probing for a subgraph of the linguistic graph and reuse our Bird's Eye probe for it.",
"We call this setting Worm's Eye as we are now analyzing if these representations capture local sub-structures.",
"To probe localized linguistic information G s = { V s , E s } , we use perturbation of the original structure for analysis.",
"Specifically, we add a perturbation to the original graph embedding Z based on the subgraph G s .",
"For all the nodes in V s or nodes connected by edges in E s , we add a noise on their corresponding node representations in Z .",
"Let Z (cid:48) denote the corrupted graph embedding.",
"Then, we define the following: MIL ( G s ) = 1 I ( X ; Z (cid:48) ) I ( R ; Z ) I ( X ; Z ) I ( R ; Z ) , (8) MIL describes how much MI is contributed by the local structure G s .",
"When the local structure is the whole graph, Z (cid:48) is completely noisy and MIL ( G s ) equals to 1 , which means the entire MI value I ( X ; Z ) is contributed by the local structure.",
"If the local structure is an empty set, we have 6 In our experiment, we show that the estimated values satisfy | I ( R ; Z ) | < 10 3 I ( Z ; Z ) , which is small enough to be ignored.",
"Z (cid:48) = Z .",
"Then we can get MIL ( G s ) = 0 , representing that the local structure does not contribute anything to the MI value.",
"If we control the perturbation of different types of local structures at the same level, we can compare how well they are captured by the word representations relative to each other using eq.",
"8.",
"Specifically, for relations with labels, e.g., types of dependency relations in syntax trees, we set the same perturbation on the graph embeddings.",
"Then, we test and compare MIL ( G s ) for different types of relations.",
"Larger MIL ( G s ) for a particular relation type implies that more information about this relation type is encoded in the word representations.",
"We use our Bird's Eye probe to detect two linguistic structures in the pretrained models, namely, dependency syntax (Marcus et al., 1993; De Marneffe et al., 2006) and a more semantic formalism, AMR (Banarescu et al., 2013).",
"We first use our model to probe for Stanford dependencies (de Marneffe et al., 2006).",
"For a sentence X with tokens { x 1 , x 2 ,",
"..x T } , the syntax tree defines a directed labelled tree where tokens x i are represented as nodes and relations among them as labeled edges.",
"We ignore the edge direction and labels for simplicity in our work 7 .",
"Future work can consider incorporating edge direction and labels.",
"We embed the given syntax tree into a continuous space as mentioned before.",
"Then, we calculate the MIG (eq. 7) as described before to determine how much syntax information is captured in the given contextualized representations.",
"Next, we test if contextualized representations capture a semantic graph representation the Abstract Meaning Representation (AMR) (Banarescu et al., 2013).",
"Different from syntactic trees, semantic graphs are not tree structured, and there can be loops or reentrencies.",
"In the AMR annotation, plurality, articles and tenses were dropped and thus, there is no one-to-one corresponding between words in the sentence and nodes in the AMR graph.",
"Thus, we use an off-the-shelf aligner (Pour-damghani et al., 2014) and calculate MI between the AMR graph embedding and the representations of those words that are aligned with a node in the AMR graph.",
"For simplicity, edge directions and 7 The Stanford dependency tree also contains one empty root node, which is also ignored labels are also ignored in this setting.",
"Our experiments mainly comprise of two parts: 1. Verification of the probe: The first part is for verification of the probing methodology and ensuring that the graph embeddings retain information about the linguistic graphs i.e eq.",
"1 holds.",
"We do this by testing if the graph embeddings can be used to restore the original graph.",
"2. Probing for graph structures: The second part is about using the probe to detect syntactic and semantic graph structures in BERT.",
"Importantly, we probe if pretrained BERT captures entire graph structures as well as specific relational information in these linguistic graphs.",
"To contrast with previous accuracy and training based probes, we also train a group of simple MLP models with different number of hidden layers and use accuracy for probing.",
"We show that designing and training a model to probe entire or localized linguistic structures is not as reliable as our information-theoretic approach.",
"We use gold annotations from the Penn Treebank and the AMR Bank for all our experiments.",
"For the contextualized word representations, we select pretrained BERT models, specifically BERT-base (uncased) and BERT-large (uncased).",
"Since BERT generates word-piece embeddings, to align them with gold word-level tokens, we represent each token as the average of its word-piece embeddings as in Hewitt and Manning (2019).",
"We also use two non-contextual word embeddings as baselines: GloVe embeddings (Pennington et al., 2014) and ELMo-0, character-level word embeddings with no contextual information generated by pretrained ELMo (Reid et al., 2020).",
"We first evaluate how well the graph embeddings can capture the linguistic graph structures by predicting the original graphs with them.",
"We use simple MLPs of 6 different settings with varying number of hidden layers.",
"More details can be found in Appendix C. We use AUC score as the metric to evaluate the graph prediction performance, which is a common metric in link prediction that computes area under the ROC curve (Fawcett, 2006).",
"The results are presented in Table 1. We can see that for both syntax trees and semantic graphs, MLPs can achieve good performance in restoring the original graph using graph embeddings where 0 5 10 15 20 25 BERT Hidden Layer Index 0.0 0.2 0.4 0.6 0.8 1.0 MIG lower bound upper bound GloVe ELMo0 BERT-base BERT-large 0 5 10 15 20 25 BERT Hidden Layer Index 0.0 0.2 0.4 0.6 0.8 1.0 MIG lower bound upper bound GloVe ELMo0 BERT-base BERT-large Figure 2: MIG scores with syntactic and semantic structures, respectively for word representations in BERT models (BERT-base with 12 layers and BERT-large with 24 layers).",
"Thus, we can be confi-dent that equation 1 holds, and we can calculate MI based on the graph embeddings.",
"Future work can explore better graph embedding approaches.",
"We also evaluate our probe by adding noisy representations to the graph embeddings to prove that it is capable of teasing out different levels of dependencies.",
"Details can be referred to in Appendix D. 4.2 Probing Entire Structures We first used the Bird's Eye probe to detect if entire linguistic structures are encoded in hidden representations of BERT 8 .",
"We also include two non-contextual word representations GloVe and ELMo-0 as baselines.",
"We report MIG as the results of our probe on the two graph structures in Figures",
"2(a) and",
"2(b).",
"The MIG estimations for syntactic structure probing of both BERT-base and BERT-large are quite high, which implies that BERT encodes much syntactic information.",
"However, for the semantic structure, the MIG scores of BERT models are lower, suggesting that BERT does not encode the semantic structures as well.",
"These two conclusions are consistent with previous works (Liu et al., 2019; 8 For all MI estimation experiments, we repeat the experiment 20 times and take the average to get stable results. Tenney et al., 2019b; Wu et al., 2021) which have found that unlike syntax, semantics is not captured well by the pretrained models.",
"We also observe an interesting trend when comparing MIG across layers.",
"We find that for syntax, MIG starts to decrease in the upper layers, especially for the BERT-large.",
"This is consistent with previous works which report that BERT models syntax more in the lower and middle layers (Ten-ney et al., 2019a).",
"For semantic graphs, MIG is steady across all layers.",
"It means that semantic information is spread across the entire model.",
"The results are consistent with existing work (Rogers et al., 2020).",
"For the two non-contextual baselines, GloVe and ELMo-0, we can see that their MIG scores are lower compared with contextualized representations, especially for syntax.",
"Previous work (Hewitt and Manning, 2019) has drawn similar conclusions.",
"While for the semantic graphs, the gap is not significant.",
"In this section, we show how we can use the Worm's Eye probe to understand if the contextualized representations capture localized linguistic information in the dependency parses such as POS information or relational dependency information.",
"As described before, we design various perturbation experiments using our Worm's Eye probe.",
"For probing POS information or a dependency relation type, we add noise to the graph embeddings of the corresponding node(s).",
"After that, we calculate the MIL ratio (eq. 8) to show how much particular linguistic information (POS or relation type information) is contained in the word representations.",
"We repeat the experiment 20 times and use boxplots to present all the results.",
"First we use Worm's Eye to test for POS information, which is tagged as node labels in the dependency tree.",
"We select 5 POS tags: IN , NNP , DT , JJ , and NNS , which have high and roughly the same frequencies in the Penn Treebank dataset.",
"Complete statistics about the POS tag frequencies can be found in Appendix E. We ensure that the amount of perturbation of the graph embeddings is the same for each type.",
"Figure 3 presents the results.",
"We find that NNP achieves the highest MIL score, while NNS achieves the lowest.",
"This implies that BERT encodes syntactic information for singular proper nouns ( NNP ) and adjectives ( JJ ) more than plural nouns ( NNS ).",
"Next, we probe 5 types of universal dependency relations in the Penn Treebank dataset (PTB).",
"These are prep , pobj , det , nn and nsubj .",
"These 5 relations also roughly occur the same number of times in PTB.",
"Complete statistics about the number of occurences of these relation types can be found in Appendix E. Similarly, for each type of 0 1 2 3 4 5 # of MLP Hidden Layers 0.4 0.5 0.6 0.7 0.8 0.9 1.0 AUCS c o r e random embeddings GloVe ELMo0 BERT-base BERT-large Figure 5: AUC scores of predicting syntactic trees by various word representations.",
"relation, we add same amount of perturbation to graph embeddings of nodes connected by the specific relations.",
"Figure 4 shows the results, where nsubj relations have the lowest MIL score compared with other 4 types.",
"This means that BERT encodes more syntactic structure for prepositional modifiers ( prep ), object of a preposition ( pobj ), and noun compound modifier ( nn ) than nominal subject ( nsubj ).",
"Reif et al. (2019) have drawn similar conclusions while probing for dependency arc labels.",
"Similar experiment for semantic structure can be found in Appendix F. 4.4 On Accuracy-Based Probing In contrast to our information-theoretic approach to probing, we train a group of MLP models to probe entire and local structures in BERT-base.",
"We show that these probe results mainly depend on the model complexity rather than the structure itself.",
"Probing entire graph structures.",
"A group of MLPs are trained to predict entire syntactic and semantic structures with word representations.",
"Figure 5 and Figure 6 show the results.",
"Their trends are similar.",
"Shallow MLPs perform the worst and deep ones perform much better.",
"Previous work on structural probing (Hewitt and Manning, 2019) argues that powerful models could parse the word representations, thus a simple model should be designed.",
"However, in Table 1, we find that linear model even could not restore the graph by its embeddings.",
"Obviously, its performance cannot indicate how much structure information is included in the graph embeddings.",
"Thus, there is no reasonable principle to decide the complexity of the probe model.",
"Given this, designing and training a model is not suitable to probe entire structures.",
"A similar argument has been placed by previous works (Pimentel et al., 2020a,b; Lovering et al., 2021).",
"To prove that accuracies of probe models for localized structure also mainly depend on the model's complexity rather than the local structure, we train the group of MLPs to predict the entire syntactic structure by word representations, and calculate the AUC scores for each type of relations in test set as probing results.",
"Table 2 shows the AUC score of predicting specific type of relations.",
"For syntactic structure, same 5 types of relations are selected, and for semantic graphs, we select 3 groups of relations to probe: arg , general and op .",
"Complete statistics of AMR Bank are in Appendix E. From the results, we can find that for MLP models with different number of hidden layers, the ranks of AUC scores of relation prediction are quite different.",
"For both syntax trees and semantic graphs, there is no consistent interpretation of the results to conclude which types of relations are encoded in BERT.",
"We also run the experiment in the perturbation settings, which can be referred to Appendix G. Combining the results of probing with accuracy in Figure 5, Figure 6, and Table 2, we can find that the prediction decisions are not based purely on the structure but rather on spurious heuristics.",
"This has also been concluded and discussed in some recent works (Hewitt and Liang, 2019; Lovering et al., 2021).",
"Thus, training models is not feasible to probe structures.",
"For our probe methods, the randomness of models such as complexity is not an issue, since the one with highest estimation should be selected for tighter compression lemma bound 4 as introduced by Pimentel et al. (2020b).",
"Information-theoretic approaches sacrifice simplicity and efficiency to achieve reliable probing results compared to accuracy-based probes.",
"Even though our probes are quite simple, there are more hyperparameters that need to be selected by users compared to accuracy-based probes.",
"To help users implement our methods in their setting, we briefly describe some guiding principles to help them select hyperparameters, and point out several potential ways to make our probing approach more efficient.",
"Our probes are composed of two steps:",
"(a) computation of the graph embedding, and",
"(b) estimation of the mutual information.",
"The guiding principal in the graph embedding step is to retain as much linguistic graph information as possible.",
"In our experiments, we used default hyperparameters in DeepWalk (Perozzi et al., 2014) for simplicity.",
"Details can be found in Appendix A. However, users may use also use other graph embedding approaches that incorporate edge labels, etc. to improve our model.",
"As the mutual information estimation procedure is estimating a lower bound to the true mutual information, the guiding principle for hyperparameter selection in this step should be to let the MI estimation values be as large as possible.",
"In particular, model size is worth noting.",
"Deeper models can achieve a tighter lower bound.",
"However, these are less efficient than shallow ones.",
"Thus, the selection of MI estimator's complexity is a tradeoff.",
"According to our empirical experience, a relatively good choice is to use a two-layer MLPs.",
"More details can be found in Appendix D. Note that it might also be harder to achieve convergence with deeper models as training of MI estimators is notoriously difficult.",
"We leave a better exploration of this to future work.",
"Potential users might also resort to other solutions to make the probes more efficient.",
"If the bottleneck is in the graph embedding step, some fast approaches (Hamilton et al., 2017; Tang et al., 2015) can be chosen instead.",
"If the mutual information estimation step is the bottlenneck, some sampling strategies can be used.",
"A simple way is to sample a subset of the dataset, and optimize eq.",
"5 based on that subset.",
"Alternatively, potential users can use more sophisticated sampling strategies in training as in Recht and Re (2012).",
"These approaches achieve a much better convergence rate for MI estimation.",
"Syntax and Semantics Probing.",
"Many existing works probe language models directly or indirectly showing how much syntactic and semantic information is encoded in them.",
"Belinkov et al. (2017) tested NMT models and found that higher layers encode semantic information while lower layers perform better at POS tagging.",
"Similarly, Jawahar et al. (2019) tested various BERT layers and found that it encodes a rich hierarchy of linguistic information in the intermediate layers.",
"Tenney et al. (2019b); Wu et al. (2021) compared the syntactic and semantic information in BERT and its variants, and found that more syntactic information is encoded than semantic information.",
"Conneau et al. (2018) focused on probing various linguistic features with 10 different designed tasks.",
"Hewitt and Manning (2019) designed a tree distance and depth prediction task to probe syntax tree structures.",
"Information Theoretic Probe.",
"With the popularity of probe methods, limitations of previous methods have also been found.",
"Information theoretic methods have been proposed as an alternative.",
"To avoid the randomness of performance brought by the varying sizes of the probe models, Pimentel et al. (2020b) proposed an information-theoretic probe with control functions, which used mutual information instead of model performance for probing.",
"Voita and Titov (2020) restricted the probe model size by Minimum Description Length.",
"Training a model is recast as teaching it to effectively transmit the data.",
"Lovering et al. (2021) pointed out that if we train a model to probe, the decisions are often not based on information itself, but rather on spurious heuristics specific to the training set.",
"Mutual Information Estimation.",
"Mutual information estimation is a well-known difficult problem, especially when the feature vectors are in a high dimensional space (Chow and Huang, 2005; Peng et al., 2005).",
"There are many traditional ways to estimate MI, such as the wellknown histogram approach (Steuer et al., 2002; Paninski, 2003), density estimations using a kernel (Moon et al., 1995), and nearest-neighbor distance (Kraskov et al., 2004).",
"Belghazi et al. (2018) was recently proposed as a way to estimate MI using neural networks, which showed marked improvement over previous methods for feature vectors in high-dimensional space.",
"In this paper we propose a general information-theoretic probe method, which is capable of probing for linguistic graph structures and avoids the randomness of training a model.",
"In the experiments, we use our probe method to show the extent to which syntax trees and semantic graphs are encoded in pretrained BERT models.",
"Further, we perform a simple perturbation analysis to show that with small modifications, the probe can also be used to probe for specific linguistic sub-structures.",
"There are some limitations of our probe.",
"First, a graph embedding is used, and some structure information could be lost in this process.",
"We provide simple ways to test this.",
"Second, training a MI estimation model is difficult.",
"Future work can consider building on our framework by exploring better graph embedding and MI estimation techniques.",
"In recent years, deep learning approaches have been the main models for state-of-the-art systems in natural language processing.",
"However, understanding the decision making in these systems has been hard, and has challenges when these systems are used in human contexts.",
"Probing helps us gain interpretability and hence is useful in deploying these black-box models.",
"Our work introduces a simple and general way for understanding how linguistic properties represented as graph structures are encoded in large pretrained language models which are being applied to a wide range of structures in NLP.",
"The methodology and probing results can be helpful to the development of future NLP models.",
"While our model is not tuned for any specific real-world application domain, our methods could be used in sensitive contexts such as legal or healthcare settings, and it is essential that any work using our probe method undertake extensive quality-assurance and robustness testing before using it in their setting.",
"The datasets used in our work do not contain any sensitive information to the best of our knowledge.",
"We would like to thank reviewers for the constructive comments and providing suggestions for future work.",
"This work was funded by SNF project #201009."
] | [
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"other"
] |
[
"Nowadays, the interpretability of machine learning models is becoming increasingly important, especially in the medical domain.",
"Aiming to shed some light on how to rationalize medical relation prediction, we present a new interpretable framework inspired by existing theories on how human memory works, e.g., theories of recall and recognition.",
"Given the corpus-level statistics, i.e., a global co-occurrence graph of a clinical text corpus, to predict the relations between two entities, we first recall rich contexts associated with the target entities, and then recognize relational interactions between these contexts to form model rationales, which will contribute to the final prediction.",
"We conduct experiments on a real-world public clinical dataset and show that our framework can not only achieve competitive predictive performance against a comprehensive list of neural baseline models, but also present rationales to justify its prediction.",
"We further collaborate with medical experts deeply to verify the usefulness of our model rationales for clinical decision making 1 .",
"Predicting relations between entities from a text corpus is a crucial task in order to extract structured knowledge, which can empower a broad range of downstream tasks, e.g., question answering (Xu et al., 2016), dialogue systems (Lowe et al., 2015), reasoning (Das et al., 2017), etc.",
"There has been a large amount of existing work focusing on predicting relations based on raw texts (e.g., sentences, paragraphs) mentioning two entities (Hendrickx et al., 2010; Zeng et al., 2014; Zhou et al., 2016; Mintz et al., 2009; Riedel et al., 2010; Lin et al., 2016; Verga et al., 2018; Yao et al., 2019).",
"In this paper, we study a relatively new setting in which we predict relations between entities based on the global co-occurrence statistics aggregated from a text corpus, and focus on medical relations and clinical texts in Electronic Medical Records (EMRs).",
"The corpus-level statistics present a holistic graph view of all entities in the corpus, which will greatly facilitate the relation inference, and can better preserve patient privacy than raw or even de-identified textual content and are becoming a popular substitute for the latter in the research community for studying EMR data (Finlayson et al., 2014; Wang et al., 2019).",
"To predict relations between entities based on a global co-occurrence graph, intuitively, one can first optimize the graph embedding or global word embedding (Pennington et al., 2014; Perozzi et al., 2014; Tang et al., 2015), and then develop a relation classifier (Nickel et al., 2011; Socher et al., 2013; Yang et al., 2015; Wang et al., 2018) based on the embedding vectors of the two entities.",
"However, such kind of neural frameworks often lack the desired interpretability , which is especially important for the medical domain.",
"In general, despite 8079 their superior predictive performance in many NLP tasks, the opaque decision-making process of neural models has concerned their adoption in high stakes domains like medicine, finance, and judi-ciary (Rudin, 2019; Murdoch et al., 2019).",
"Building models that provide reasonable explanations and have increased transparency can remarkably enhance user trust (Ribeiro et al., 2016; Miller, 2019).",
"In this paper, we aim to develop such a model for our medical relation prediction task.",
"To start with, we draw inspiration from the existing theories on cognitive processes about how human memory works, e.g., two types of memory retrieval (recall and recognition) (Gillund and Shiffrin, 1984).",
"Basically, in the recall process, humans tend to retrieve contextual associations from long-term memory.",
"For example, given the word Paris, one may think of Eiffel Tower or France, which are strongly associated with Paris (Nobel and Shiffrin, 2001; Kahana et al., 2008; Budiu, 2014).",
"Besides, there is a strong correlation between the association strength and the co-occurrence graph (Spence and Owens, 1990; Lundberg and Lee, 2017).",
"In the recognition process, humans typically recognize if they have seen a certain piece of information before.",
"Figure 1 shows an example in the context of relation prediction.",
"Assume a model is to predict whether Aspirin may treat Headache or not (That Aspirin may treat Headache is a known fact, and we choose this relation triple for illustration purposes).",
"It is desirable if the model could perform the aforementioned two types of memory processes and produce rationales to base its prediction upon: (1) Recall.",
"What entities are associated with Aspirin ?",
"What entities are associated with Headache ?",
"(2) Recognition.",
"Do those associated entities hold certain relations, which can be leveraged as clues to predict the target relation?",
"For instance, a model could first retrieve a relevant entity Pain Relief for the tail entity Headache as they co-occur frequently, and then recognize there is a chance that Aspirin can lead to Pain Relief (i.e., formulate model rationales or as-sumptions), based on which it could finally make a correct prediction ( Aspirin , may treat, Headache ).",
"Now we formalize such intuition to rationalize the relation prediction task.",
"Our framework consists of three stages, global association recall (CogStage-1), assumption formation and representation (CogStage-2), and prediction decision making (CogStage-3), shown in Figure 2. CogStage-1 Associations Entity Pair RecallMemory RecognitionMemory Pred.",
"models the process of recalling diverse contextual entities associated with the target head and tail entities respectively, CogStage-2 models the process of recognizing possible interactions between those recalled entities, which serve as model rationales (or, assumptions 2 ) and are represented as semantic vectors, and finally CogStage-3 aggregates all assumptions to infer the target relation.",
"We jointly optimize all three stages using a training set of relation triples as well as the co-occurrence graph.",
"Model rationales can be captured through this process without any gold rationales available as direct supervision.",
"Overall, our framework rationalizes its relation prediction and is interpretable to users 3 by providing justifications for",
"(i) why a particular prediction is made,",
"(ii) how the assumptions of the prediction are developed, and",
"(iii) how the particular assumptions are relied on.",
"On a real-life clinical text corpus, we compare our framework with various competitive methods to evaluate the predictive performance and interpretability.",
"We show that our method obtains very competitive performance compared with a comprehensive list of various neural baseline models.",
"Moreover, we follow recent work (Singh et al., 2019; Jin et al., 2020) to quantitatively evaluate model interpretability and demonstrate that rationales produced by our framework can greatly help earn expert trust.",
"To summarize, we study the important problem of rationalizing medical relation prediction based on corpus-level statistics and propose a new framework inspired by cognitive theories, which outperforms competitive baselines in terms of both interpretability and predictive performance.",
"Different from existing work using raw texts for relation extraction, we assume a global co-occurrence graph (i.e., corpus-level statistics) is given, which was pre-constructed based on a text corpus D , and denote it as an undirected graph G = ( V , E ) , where",
"2 We use the two terms interchangeably in this paper.",
"3 Following Murdoch et al. (2019), desired interpretability is supposed to provide insights to particular audiences, which in our case are medical experts.",
"each vertex v V represents an entity extracted from the corpus and each edge e E is associated with the global co-occurrence count for the connected nodes.",
"Counts reflect how frequent two entities appear in the same context (e.g., co-occur in the same sentence, document, or a certain time frame).",
"In this paper, we focus on clinical co-occurrence graph in which vertices are medical terms extracted from clinical notes.",
"Nevertheless, as we will see later, our framework is very general and can be applied to other relations with corpus-level statistics.",
"Our motivation for working under this setting lies in three folds: (1) Such graph data is stripped of raw textual contexts and thus, has a better preserving of patient privacy (Wang et al., 2019), which makes itself easier to be constructed and shared under the HIPPA protected environments (Act, 1996) for medical institutes (Finlayson et al., 2014); (2) Compared with open-domain relation extraction, entities holding a medical relation oftentimes do not co-occur in a local context (e.g., a sentence or paragraph).",
"For instance, we observe that in a widely used clinical co-occurrence graph (Fin-layson et al., 2014), which is also employed for our experiments later, of all entity pairs holding the treatment relation according to UMLS (Uni-fied Medical Language System), only about 11.4% have a co-occurrence link (i.e., co-occur in clinical notes within a time frame like 1 day or 7 days); (3) As suggested by cognitive theories (Spence and Owens, 1990), lexical co-occurrence is significantly correlated with association strength in the recall memory process, which further inspires us to utilize such statistics to find associations and form model rationales for relation prediction.",
"Finally, our relation prediction task is formulated as: Given the global statistics G and an entity pair, we predict whether they hold a relation r (e.g., MAY TREAT ), and moreover provide a set of model rationales T composed of relation triples for the prediction.",
"For the example in Figure 1, we aim to build a model that will not only accurately predict the MAY TREAT relation, but also provide meaningful rationales on how the prediction is made, which are crucial for gaining trust from clinicians.",
"Following a high-level framework illustration in Figure 2, we show a more detailed overview in Figure 3 and introduce each component as follows.",
"3.1 CogStage-1: Global Association Recall Existing cognitive theories (Kahana et al., 2008) suggest that recall is an essential function of human memory to retrieve associations for later decision making.",
"On the other hand, the association has been shown to significantly correlate with the lexical co-occurrence from the text corpus (Spence and Owens, 1990; Lund and Burgess, 1996).",
"Inspired by such theories and correlation, we explicitly build up our model based on recalled associations stemming from corpus-level statistics and provide global highly-associated contexts as the source of interpretations.",
"Given an entity, we build an estimation module to globally infer associations based on the corpus-level statistics.",
"Our module leverages distributional learning to fully explore the graph structure.",
"One can also directly utilize the raw neighborhoods in the co-occurrence graph, but due to the noise introduced in the preprocessing of building the graph, it is a less optimal choice in real practice.",
"Specifically, for a selected node/entity e i E , our global association recall module estimates a conditional probability p ( e j | e i ) , representing how likely the entity e j E is associated with e i 4 .",
"We formally define such conditional probability as: p ( e j | e i ) = exp ( (cid:48) Te j e i ) (cid:80) |V| k =1 exp ( (cid:48) Te k e i ) (1) 4 We assume all existing entities can be possible associations for the given entity.",
"where e i R d is the embedding vector of node (cid:48) R d",
"e i and e j is the context embedding for e j .",
"There are many ways to approximate p ( e j | e i ) from the global statistics, e.g., using global log-bilinear regression (Pennington et al., 2014).",
"To estimate such probabilities and update entity embeddings efficiently, we optimize the conditional distribution p ( e j | e i ) to be close to the empirical distribution p ( e j | e i ) defined as: p ( e j | e i ) = p ij (cid:80) ( i,k ) E p ik (2) where E is the set of edges in the co-occurrence graph and p ij is the PPMI value calculated by the co-occurrence counts between node e i and e j .",
"We adopt the cross entropy loss for the optimization: L n = (cid:88) ( e i ,e j ) V p ( e j | e i ) log ( p ( e j | e i )) (3) This association recall module will be jointly trained with other objective functions to be introduced in the following sections.",
"After that, given an entity e i , we can select the topN c entities from p ( | e i ) as e i 's associative entities for subsequent assumption formation.",
"As shown in Figure 3, with the associative entities from CogStage-1, we are ready to formulate and represent assumptions.",
"In this paper, we define model assumptions as relational interactions between associations , that is, as shown in Figure 1, the model may identify ( Caffeine , MAY TREAT , Migraine ) as an assumption, which could help predict Aspirin may treat Headache ( Caffeine and Migraine are associations for Aspirin and Headache respectively).",
"Such relational rationales are more concrete and much easier for humans to understand than the widely-adopted explanation strategy (Yang et al., 2016; Mullenbach et al., 2018; Vashishth et al., 2019) in NLP that is based on pure attention weights on local contexts.",
"One straightway way to obtain such rationales is to query existing medical knowledge bases (KBs), e.g., ( Caffeine , MAY TREAT , Migraine ) may exist in SNOMED CT 5 and can serve as a model rationale.",
"We refer to rationales acquired in this way as the Closed-World Assumption (CWA) (Reiter, 1981) setting since only KB-stored facts are considered and trusted in a closed world.",
"In contrast 5 https://www.snomed.org/ to the CWA rationales, considering the sparsity and incompleteness issues of KBs that are even more severe in the medical domain, we also propose the Open-World Assumptions (OWA) (Ceylan et al., 2016) setting to discover richer rationales by estimating all potential relations between associative entities based on a seed set of relation triples (which can be regarded as prior knowledge).",
"In general, the CWA rationales are relatively more accurate as each fact triple has been verified by the KB, but would have a low coverage of other possibly relevant rationales for the target prediction.",
"On the other hand, the OWA rationales are more comprehensive but could be noisy and less accurate, due to the probabilistic estimation procedure and the limited amount of prior knowledge.",
"However, as we will see, by aggregating all OWA rationales into the whole framework with an attention-based mechanism, we can select high-quality and most relevant rationales for prediction.",
"For the rest of the paper, by default we adopt the OWA setting in our framework and describe its details as follows.",
"Specifically, given a pair of head and tail entity, e h , e t V , let us denote their association sets as A ( e h ) = { a ih } N h i =1 and A ( e t ) = { a jt } N t j =1 , where N h , N t are the number of associative entities a h , a t to use.",
"Each entity has been assigned an embedding vector by the previous association recall module.",
"We first measure the probability of relations holding for the pair.",
"Given a ih A ( e h ) , a jt A ( e t ) and a relation r k R , we define a scoring function as Bordes et al. (2013) to estimate triple quality: s ijk = f ( a ih , r k , a jt ) = || a ih + k a jt || 1 (4) where a ih and a jt are embedding vectors, relations are parameterized by a relation matrix R RN r d and k is its k -th row vector.",
"Such a scoring function encourages larger value for correct triples.",
"Additionally, in order to filter unreliable estimations, we define an NA relation to represent other trivial relations or no relation as the score, s ij NA = f ( a ih , NA , a jt ) , which can be seen as a dynamic threshold to produce reasonable rationales.",
"Now we formulate OWA rationales by calculating the conditional probability of a relation given a pair of associations as follows (we save the superscript ij for space): p ( r k | a ih , a jt ) = exp ( s k ) (cid:80) s k s NA exp ( s k ) , s k > s NA 0 , s k s NA (5) 8082 For each association pair, ( a ih , a jt ) , we only form an assumption with a relation r k if r k is top ranked according to p ( r k | a ih , a jt ) .",
"6 To represent assumptions, we integrate all relation information per pair into a single vector representation.",
"Concretely, we calculate the assumption representation by treating p ( r k | a ih , a jt ) as weights for all relations as follows: a ij = ( a ih , a jt ; R ) = N r (cid:88) k (cid:48) =1 p ( r k (cid:48) | a ih , a jt ) k (cid:48) (6) Finally, we combine the entity vectors as well as the relation vector to get the final representation of assumptions for association pair ( a ih , a jt ) , where c i A ( e h ) and c j A ( e t ) : e ij = tanh ([ a ih ; a jt ; a ij ] W p + b p ) (7) where [ ; ] represents vector concatenation, W p R 3 d d p , b p R d p are the weight matrix and bias in a fully-connected network.",
"Analogical to human thinking, our decision making module aggregates all assumption representations and measures their accountability for the final prediction.",
"It learns a distribution over all assumptions and we select the ones with highest probabilities as model rationales.",
"More specifically, we define a scoring function g ( e ij ) to estimate the accountability based on the assumption representation e ij and normalize g ( e ij ) as: g ( e ij ) = v T tanh ( W a e ij + b a ) (8) p ij = exp( g ( e ij )) (cid:80) N h m =1 (cid:80) N t n =1 exp( g ( e mn )) (9) where W a , b a are the weight matrix and bias for the scoring function.",
"Then we get the weighted rationale representation as: r = ( e h , e t ) = N h (cid:88) i =1 N t (cid:88) j =1 p ij e ij (10) With the representation of weighted assumption information for the target pair ( e h , e t ) , we calculate the binary prediction probability for relation r as: p ( r | e h , e t ) = ( W r r + b r ) (11) where ( x ) = 1 / (1 + exp( x )) and W r , b r are model parameters.",
"6 We remove the target relation to predict if it exists in the assumption set.",
"Rationalizing relation prediction.",
"After fully training the entire model, to recover the most contributing assumptions for predicting the relation between the given target entities ( e h , e t ) , we compute the importance scores for all assumptions and select those most important ones as model rationales.",
"In particular, we multiply p ij (the weight for association pair ( a ih , a jt ) in Eqn.",
"9) with p ( r k | a ih , a jt ) (the probability of a relation given the pair ( a ih , a jt ) in Eqn.",
"5) to score the triple ( a ih , r k , a jt ) .",
"We rank all such triples for a ih A ( e h ) , a jt A ( e t ) , r k R and select the topK triples as model rationales for the final relation prediction.",
"We now describe how we train our model efficiently for multiple modules.",
"For relational learning to estimate the conditional probability p ( r k | a ih , a jt ) , we utilize training data as the seed set of triples for all relations as correct triples denoted as ( h, r, t ) P .",
"The scoring function in Eqn.",
"4 is expected to score higher for correct triples than the corrupted ones in which we denote N (? , r, t ) ( N ( t, r, ?) ) as the set of corrupted triples by replacing the head (tail) entity randomly.",
"Instead of using margin-based loss function, we adopt a more efficient training strategy from (Kadlec et al., 2017; Toutanova and Chen, 2015) with a negative log likelihood loss function as: L r = (cid:80) ( h,r,t ) P log p ( h | t, r ) (cid:80) ( h,r,t ) P log p ( t | h, r ) (12) where the conditional probability p ( h | t, r ) is defined as follows ( p ( t | h, r ) is defined similarly): p ( h | t, r ) = exp( f ( h, r, t )) (cid:80) h (cid:48) N (? ,r,t ) exp( f ( h (cid:48) , r, t )) (13) For our binary relation prediction task, we define a binary cross entropy loss function with Eqn.",
"11 as follows: L p = (cid:80) Mi =1 ( y i log ( p ( r | e ih , e it )) + (1 y i ) log (1 p ( r | e ih , e it ))) (14) where M is the number of samples, y i is the label showing whether e h , e t holds a certain relation.",
"The above three loss functions, i.e., L n for global association recall, L r for relational learning and L p for relation prediction, are all jointly optimized.",
"All three of them share the entity embeddings and L p will reuse the relation matrix from L r to conduct the rationale generation.",
"In this section, we first introduce our experimental setup, e.g, the corpus-level co-occurrence statistics and datasets used for our experiments, and then compare our model with a list of comprehensive competitive baselines in terms of predictive performance.",
"Moreover, we conduct expert evaluations as well as case studies to demonstrate the usefulness of our model rationales.",
"We directly adopt a publicly available medical co-occurrence graph for our experiments (Finlayson et al., 2014).",
"The graph was constructed in the following way: Finlayson et al. (2014) first used an efficient annotation tool (LePendu et al., 2012) to extract medical terms from 20 million clinical notes collected by Stanford Hospitals and Clinics, and then computed the co-occurrence counts of two terms based on their appearances in one patient's records within a certain time frame (e.g., 1 day, 7 days).",
"We experiment with their biggest dataset with the largest number of nodes (i.e., the per-bin 1-day graph here 7 ) so as to have sufficient training data.",
"The co-occurrence graph contains 52,804 nodes and 16,197,319 edges.",
"To obtain training labels for relation prediction, we utilize the mapping between medical terms and concepts provided by Finlayson et al. (2014).",
"To be specific, they mapped extracted terms to UMLS concepts with a high mapping accuracy by suppressing the least possible meanings of each term (see Finlayson et al. (2014) for more details).",
"We utilize such mappings to automatically collect relation labels from UMLS.",
"For term e a and e b that are respectively mapped to medical concept c A and c B , we find the relation between c A and c B in UMLS, which will be used as the label for e a and e b .",
"Following Wang and Fan (2014) that studied distant supervision in medical text and identified several crucial relations for clinical decision making, we select 5 important medical relations with no less than 1,000 relation triples in our dataset.",
"Each relation is mapped to UMLS semantic relations, e.g., relation CAUSES corresponds to cause of , induces , causative agent of in UMLS.",
"A full list of mapping is in the appendix.",
"We sample an equal number of negative pairs by randomly pairing head and tail entities with the correct argument types (Wang 7 https://datadryad.org/stash/dataset/ doi:10.5061/dryad.jp917 Med Relations Train Dev Test Symptom of 14,326 3,001 3,087 May treat 12,924 2,664 2,735 Contraindicates 10,593 2,237 2,197 May prevent 2,113 440 460 Causes 1,389 305 354 Total 41.3k 8.6k 8.8k Table 1: Dataset Statistics. et al., 2016).",
"We split all samples into train/dev/test sets with a ratio of 70/15/15.",
"Only relation triples in the training set are used to optimize relational parameters.",
"The statistics of the positive samples for relations are summarized in Table 1. 4.2 Predictive Performance Evaluation Compared Methods.",
"There are a number of advanced neural methods (Tang et al., 2015; Qu et al., 2018; Wang et al., 2018) that have been developed for the link prediction task, i.e., predicting the relation between two nodes in a co-occurrence graph.",
"At the high level, their frameworks comprise of an entity encoder and a relation scoring function.",
"We adapt various existing methods for both the encoder and the scoring functions for comprehensive comparison.",
"Specifically, given the co-occurrence graph, we employ existing distributional representation learning methods to learn entity embeddings.",
"With the entity embeddings as input features, we adapt various models from the knowledge base completion literature as a binary relation classifier.",
"More specifically, for the encoder, we select one word embedding method, Word2vec (Mikolov et al., 2013; Levy and Goldberg, 2014), two graph embedding methods, random-walk based DeepWalk (Perozzi et al., 2014), edge-sampling based LINE (Tang et al., 2015), and one distributional approach REPEL-D (Qu et al., 2018) for weakly-supervised relation extraction that leverages both the co-occurrence graph and training relation triples to learn entity representations.",
"For the scoring functions, we choose DistMult (Yang et al., 2015), RESCAL (Nickel et al., 2011) and NTN (Socher et al., 2013).",
"Note that one can apply more complex encoders or scoring functions to obtain higher predictive performance; however, in this work, we emphasize more on model interpretability than predictive performance, and unfortunately, all such frameworks are hard to interpret as they provide little or no 8084 Methods MAY TREATCONTRAIN .",
"We also show the predictive performance of our framework under the CWA setting in which the CWA rationales are existing triples in a closed knowledge base (i.e., UMLS).",
"We first adopt the pre-trained association recall module to retrieve associative contexts for head and tail entities, then formulate the assumptions using top-ranked triples (that exist in our relation training data), where the rank is based on the product of their retrieval probabilities ( p ij = p ( e i | e h ) p ( e j | e t ) ).",
"We keep the rest of our model the same as the OWA setting.",
"Results.",
"We compare the predictive performance of different models in terms of F1 score under each relation prediction task.",
"As shown in Table 2, our model obtains very competitive performance compared with a comprehensive list of baseline methods.",
"Specifically, on the prediction tasks of MAY TREAT and CONTRAINDICATES , our model achieves a substantial improvement (1 2 F1 score) and a very competitive performance on the task of SYMPTOM OF and MAY PREVENT .",
"The small amount of training data might partly explain why our model does not perform so well in the CAUSES tasks.",
"Such comparison shows the effectiveness of predicting relations based on associations and their relational interactions.",
"Moreover, compared with those baseline models which encode graph structure into latent vector representation, our model utilizes co-occurrence graph more explicitly by leveraging the associative contexts symbolically to generate human-understandable rationales, which can assist medical experts as we will see shortly.",
"In addition, we observe that our model consistently OWA Rationales CWA Rationales Ranking Score 17 5 Avg.",
"outperforms the CWA setting: Despite the CWA rationales are true statements on their own, they tend to have a low coverage of possible rationales, and thus, may be not so relevant for the target relation prediction, which leads to a poor predictive performance.",
"To measure the quality of our model rationales (i.e., OWA rationales), as well as to conduct an ablation study of our model, we conduct an expert evaluation for the OWA rationales and also compare them with the CWA rationales.",
"We first collaborate with a physician to explore how much a model's rationales help them better trust the model's prediction following recent work for evaluating model interpretability (Singh et al., 2019; Mullenbach et al., 2018; Atutxa et al., 2019; Jin et al., 2020).",
"Then, we present some case studies to show what kind of rationales our model has learnt.",
"Note that compared with evaluation by human annotators for open-domain tasks (without expertise requirement), evaluation by medical experts is more challenging in general.",
"The physician in our study (an M.D. with 9 years of clinical experience and currently a fellow trained in clinical informatics), who is able to understand the context of terms and the basics of the compared algorithms and can dedicate time, is qualified for our evaluation.",
"Expert Evaluation.",
"We first explained to the physician about the recall and recognition process in our framework and how model rationales are developed.",
"They endorsed such reasoning process as one possible way to gain their trust in the model.",
"Next, for each target pair for which our model correctly makes the prediction, they were shown the top-5 rationales produced by our framework and were asked whether each rationale helps them better trust the model prediction.",
"For each rationale, they were asked to score it from 0 to 3 in which 0 is no helpful , 1 is a little helpful , 2 is helpful and 3 is very helpful .",
"In addition to the individual rationale evaluation, we further compare the overall quality of CWA and OWA rationales, by letting experts rank them based the helpfulness of each set of rationales (the rationale set ranked higher gets 1 ranking score and both get 0 if they have the same rank).",
"We refer readers to the appendix for more details of the evaluation protocol.",
"We randomly select 30 cases in the MAY TREAT relation and the overall evaluation results are summarized in Table 3. Out of 30, OWA wins in 17 cases and gets higher scores on individual rationales per case on average.",
"There are 8 cases where the two sets of rationales are ranked the same 8 and 5 cases where CWA is better.",
"To get a better idea of how the OWA model obtains more trust, we calculate the average sum score per case, which shows the OWA model gets a higher overall score per case.",
"Considering in some cases only a few rationales are able to get non-zero scores, we also calculate the average max score per case, which shows that our OWA model generally provides one helpful rationale (score > 2) per case.",
"Overall, as we can see, the OWA rationales are more helpful to gain expert trust.",
"Case Study.",
"Table 4 shows two concrete examples demonstrating what kind of model rationales our framework bases its predictions on.",
"We highlight the rationales that receive high scores from the physician for being especially useful for trusting the prediction.",
"As we can see, our framework is able to make correct predictions based on reasonable rationales.",
"For instance, to predict that cephalosporine may treat bacterial infection, our model relies on the rationale that cefuroxime may treat infectious diseases.",
"We also note that not all rationales are clinically established facts or even make sense, due to the unsupervised rationale learning and the probabilistic assumption formation 8 Of which, 7 cases are indicated equally unhelpful.",
"process, which leaves space for future work to further improve the quality of rationales.",
"Nevertheless, such model rationales can provide valuable information or new insights for clinicians.",
"For another example, as pointed out by the physician, different medications possibly having the same treatment response, as shown in Case 2, could be clinically useful.",
"That is, if three medications are predicted to possibly treat the same condition and a physician is only aware of two doing so, one might get insights into trying the third one.",
"To summarize, our model is able to provide reasonable rationales and help users understand how model predictions are made in general.",
"Relation Extraction (RE) typically focuses on predicting relations between two entities based on their text mentions, and has been well studied in both open domain (Mintz et al., 2009; Zeng et al., 2015; Riedel et al., 2013; Lin et al., 2016; Song et al., 2019; Deng and Sun, 2019) and biomedical domain (Uzuner et al., 2011; Wang and Fan, 2014; Sahu et al., 2016; Lv et al., 2016; He et al., 2019).",
"Among them, most state-of-the-art work develops various powerful neural models by leveraging human annotations, linguistic patterns, distance supervision, etc.",
"More recently, an increasing amount of work has been proposed to improve model's transparency and interpretability.",
"For example, Lee et al. (2019) visualizes self-attention weights learned from BERT (Devlin et al., 2019) to explain relation prediction.",
"However, such text-based interpretable 8086 models tend to provide explanations within a local context (e.g., words in a single sentence mentioning target entities), which may not capture a holistic view of all entities and their relations stored in a text corpus.",
"We believe that such a holistic view is important for interpreting relations and can be provided to some degree by the global statistics from a text corpus.",
"Moreover, global statistics have been widely used in the clinical domain as they can better preserve patient privacy (Finlayson et al., 2014; Wang et al., 2019).",
"On the other hand, in recent years, graph embedding techniques (Perozzi et al., 2014; Tang et al., 2015; Grover and Leskovec, 2016; Yue et al., 2019) have been widely applied to learn node representations based on graph structure.",
"Representation learning based on global statistics from a text corpus (i.e., co-occurrence graph) has also been studied (Levy and Goldberg, 2014; Pennington et al., 2014).",
"After employing such methods to learn entity embeddings, a number of relation classifiers (Nickel et al., 2011; Bordes et al., 2013; Socher et al., 2013; Yang et al., 2015; Wang et al., 2018) can be adopted for relation prediction.",
"We compare our method with such frameworks to show its competitive predictive accuracy.",
"However, such frameworks tend to be difficult to interpret as they provide little or no explanations on how decisions are made.",
"In this paper, we focus more on model interpretability than predictive accuracy, and draw inspirations from existing cognitive theories of recall and recognition to develop a new framework, which is our core contribution.",
"Another line of research related to interpreting relation prediction is path-based knowledge graph (KG) reasoning (Gardner et al., 2014; Neelakantan et al., 2015; Guu et al., 2015; Xiong et al., 2017; Stadelmaier and Pado, 2019).",
"In particular, existing paths mined from millions of relational links in knowledge graphs can be used to provide justifications for relation predictions.",
"For example, to explain Microsoft and USA may hold the relation CountryOfHeadquarters , by traversing a KG, one can extract the path Microsoft IsBasedIn Seattle CountryLocatedIn USA as one explanation.",
"However, such path-finding methods typically require large-scale relational links to infer path patterns, and cannot be applied to our co-occurrence graph as the co-occurrence links are unlabeled.",
"In addition, our work is closely related to the area of rationalizing machine decision by generating justifications/rationales accounting for model's prediction.",
"In some scenarios, human rationales are provided as extra supervision for more explainable models (Zaidan et al., 2007; Bao et al., 2018).",
"However, due to the high cost of manual annotation, model rationales are desired to be learned in an unsupervised manner(Lei et al., 2016; Bouchacourt and Denoyer, 2019; Zhao et al., 2019).",
"For example, Lei et al. (2016) select a subset of words as rationales and Bouchacourt and Denoyer (2019) provide an explanation based on the absence or presence of concepts, where the selected words and concepts are learned unsupervisedly.",
"Different from text-based tasks, in this paper, we propose to rationalize relation prediction based on global co-occurrence statistics and similarly, model rationales in our work are captured without explicit manual annotation either, via a joint training framework.",
"In this paper, we propose an interpretable framework to rationalize medical relation prediction based on corpus-level statistics.",
"Our framework is inspired by existing cognitive theories on human memory recall and recognition, and can be easily understood by users as well as provide reasonable explanations to justify its prediction.",
"Essentially, it leverages corpus-level statistics to recall associative contexts and recognizes their relational connections as model rationales.",
"Compared with a comprehensive list of baseline models, our model obtains competitive predictive performances.",
"Moreover, we demonstrate its interpretability via expert evaluation and case studies.",
"Acknowledgments We thank Srinivasan Parthasarathy, Ping Zhang, Samuel Yang and Kaushik Mani for valuable discussions.",
"We also thank the anonymous reviewers for their hard work and constructive feedback.",
"This research was sponsored in part by the Patient-Centered Outcomes Research Institute Funding ME-2017C1-6413, the Army Research Office under cooperative agreements W911NF-17-1-0412, NSF Grant IIS1815674, and Ohio Supercomputer Center (Center, 1987).",
"The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S.Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"other",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"method",
"method",
"method",
"other",
"other",
"method",
"abstain",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"objective",
"objective",
"method",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Unsupervised neural machine translation (UNMT) that relies solely on massive monolingual corpora has achieved remarkable results in several translation tasks.",
"However, in real-world scenarios, massive monolingual corpora do not exist for some extremely low-resource languages such as Estonian, and UNMT systems usually perform poorly when there is not adequate training corpus for one language.",
"In this paper, we first define and analyze the unbalanced training data scenario for UNMT.",
"Based on this scenario, we propose UNMT self-training mechanisms to train a robust UNMT system and improve its performance in this case.",
"Experimental results on several language pairs show that the proposed methods substantially outperform conventional UNMT systems.",
"Recently, unsupervised neural machine translation (UNMT) that relies solely on massive monolingual corpora has attracted a high level of interest in the machine translation community (Artetxe et al., 2018; Lample et al., 2018a; Yang et al., 2018; Lample et al., 2018b; Wu et al., 2019; Sun et al., 2019, 2020b).",
"With the help of cross-lingual language model pretraining (Lample and Conneau, 2019; Song et al., 2019; Sun et al., 2020a), the denoising auto-encoder (Vincent et al., 2010), and back-translation (Sennrich et al., 2016a), UNMT has achieved remarkable results in several translation tasks.",
"However, in real-world scenarios, in contrast to the many large corpora available for high-resource languages such as English and French, massive monolingual corpora do not exist for some extremely low-resource languages such as Estonian.",
"The UNMT system usually performs poorly in a low-resource scenario when there is not an adequate training corpus for one language.",
"In this paper, we first define and analyze the unbalanced training data scenario for UNMT.",
"Based on this scenario, we propose a self-training mechanism for UNMT.",
"In detail, we propose self-training with unsupervised training (ST-UT) and self-training with pseudo-supervised training (ST-PT) strategies to train a robust UNMT system that performs better in this scenario.",
"To the best of our knowledge, this paper is the first work to explore the unbalanced training data scenario problem in UNMT.",
"Experimental results on several language pairs show that the proposed strategies substantially outperform conventional UNMT systems.",
"In this section, we first define the unbalanced training data scenario according to training data size.",
"Consider one monolingual corpus { X } in high-resource language L 1 and another monolingual corpus { Y } in low-resource language L 2 .",
"The data size of { X } and { Y } are denoted by | X | and | Y | , respectively.",
"In an unbalanced training data scenario, | X | is generally much larger than | Y | so that training data { X } is not fully utilized.",
"To investigate UNMT performance in an unbalanced training data scenario, we empirically chose English (En) French (Fr) as the language pair.",
"The detailed experimental settings for UNMT are given in Section 5.",
"We used a transformer based XLM toolkit and followed the settings of Lample and Conneau (2019).",
"We randomly extracted 2 million sentences for each language from all 50 million sentences in the En and Fr training corpora to create small corpora and simulate unbalanced training data scenarios.",
"Table 1 shows the UNMT performance for different training data sizes.",
"The performance with 25M training sentences for both French and English configuration is similar to the baseline (50M training sentences for both French and English configuration).",
"However, the UNMT performance decreased substantially (45 BLEU points) when the size of the training data decreased rapidly.",
"In the unbalanced training data scenario, when training data for one language was added, they were not fully utilized and only slightly improved the UNMT's BLEU score.",
"The performance (2M/50M) is similar with the UNMT system, configured 2M training sentences for both French and English.",
"In short, Table 1 demonstrates that the UNMT performance is bounded by the smaller monolingual corpus.",
"The UNMT model converges and even causes over-fitting in the low-resource language while the model in the high-resource language doesn't converge.",
"This observation motivates us to better use the larger monolingual corpus in the unbalanced training data scenario.",
"We first briefly describe the three components of the UNMT model (Lample and Conneau, 2019): cross-lingual language model pre-training, the denoising auto-encoder (Vincent et al., 2010), and back-translation (Sennrich et al., 2016a).",
"Cross-lingual language model pre-training provides a naive bilingual signal that enables the back-translation to generate pseudo-parallel corpora at the beginning of the training.",
"The denoising auto-encoder acts as a language model to improve translation quality by randomly performing local substitutions and word reorderings.",
"Generally, back-translation plays an important role in achieving unsupervised translation across two languages.",
"The pseudo-parallel sentence pairs produced by the model at the previous iteration are used to train the new translation model.",
"The general back-translation probability is optimized by maximizing L bt = EX P ( X ) EY PMU ( Y | X ) logP MU ( X | Y ) + EY P ( Y ) EX PMU ( X | Y ) logP MU ( Y | X ) , (1) where P ( X ) and P ( Y ) are the empirical data distribution from monolingual corpora { X } , { Y } , and PMU ( Y | X ) and PMU ( X | Y ) are the conditional distributions generated by the UNMT model.",
"In addition, MU denotes the model at the previous iteration for generating new pseudo-parallel sentence pairs to update the UNMT model.",
"Self-training proposed by Scudder (1965), is a semi-supervised approach that utilizes unannotated data to create better models.",
"Self-training has been successfully applied to many natural language processing tasks (Yarowsky, 1995; McClosky et al., 2006; Zhang and Zong, 2016; He et al., 2020).",
"Recently, He et al. (2020) empirically found that noisy self-training could improve the performance of supervised machine translation and synthetic data could play a positive role, even as a target.",
"Based on these previous empirical findings and analyses, we propose a self-training mechanism to generate synthetic training data for UNMT to alleviate poor performance in the unbalanced training data scenario.",
"The synthetic data increases the diversity of low-resource language data, further enhancing the performance of the translation, even though the synthetic data may be noisy.",
"As the UNMT model is trained, the quality of synthetic data becomes better, causing less and less noise.",
"Compared with the original UNMT model that the synthetic data is just used as the source part, we also use the synthetic data as the target part in our proposed methods.",
"Newly generated synthetic data, together with original monolingual data, are fully utilized to train a robust UNMT system in this scenario.",
"According to the usage of the generated synthetic training data, our approach can be divided into two strategies: ST-UT (Algorithm 1) and ST-PT (Algorithm 2).",
"ST-UT: In this strategy, we first train a UNMT model on the existing monolingual training data.",
"The final UNMT system is trained using the STUT strategy for k 1 epochs.",
"For one epoch l in the ST-UT strategy, a subset { X sub } is selected randomly from monolingual training data { X } .",
"The quantity of { X sub } is (cid:15) of | X | , (cid:15) is a quantity Algorithm 1 ST-UT strategy Input: Monolingual training data { X } , { Y } 1: Train a UNMT model MU 0 on monolingual training data { X } , { Y } 2: while epoch l max epoch k 1 do 3: Select a subset { X sub } randomly on monolingual training data { X } 4: Apply the last trained UNMT model M Ul 1 to this subset { X sub } to generate synthetic data { Y subM } = { M Ul 1 ( X sub ) } 5: Train a new UNMT model M Ul on monolingual data { X } , { Y } and synthetic data { Y subM } 6: end while Output: The final translation model M Uk 1 ratio hyper-parameter.",
"The last trained UNMT model M Ul 1 is used to generate synthetic data { Y subM } = { M Ul 1 ( X sub ) } .",
"The synthetic data are used 1 , together with the monolingual data to train a new UNMT model M Ul .",
"Therefore, the translation probability for the ST-UT strategy is optimized by maximizing L bt = EX P ( X ) EY PMU l ( Y | X ) logP M Ul ( X | Y ) + EY P ( Y ) EX PMU l ( X | Y ) logP M Ul ( Y | X ) + EY PMU l 1 ( Y | X ) EX PMU l ( X | Y ) logP M Ul ( Y | X ) , (2) where PM Ul ( Y | X ) and PM Ul ( X | Y ) are the conditional distribution generated by the UNMT model on epoch l for the ST-UT strategy and PMU l 1 ( Y | X ) is the conditional distribution generated by the UNMT model on epoch l 1 for the ST-UT strategy.",
"ST-PT: In this strategy, we first train a UNMT system on the existing monolingual training data and switch to a standard neural machine translation system from UNMT system with synthetic parallel data for both translation directions.",
"The final translation system is trained using the ST-PT strategy for k 2 epochs.",
"For one epoch q in the ST-PT strategy, a subset { X sub } is selected randomly from monolingual training data { X } , and all monolingual data { Y } is selected.",
"The quantity of { X sub } is (cid:15) of | X | , (cid:15) is a quantity ratio hyper-parameter.",
"The last trained pseudo-supervised neu-1 In contrast to using all synthetic data, we tried to train a language model and select more fluent synthetic data according to a language model perplexity score.",
"This did not improve translation performance.",
"Algorithm 2 ST-PT strategy Input: Monolingual training data { X } , { Y } 1: Train a UNMT model MU 0 on monolingual training data { X } , { Y } 2: while epoch q max epoch k 2 do 3: Select a subset { X sub } randomly on monolingual training data { X } and all monolingual training data { Y all } 4: Apply the last trained PNMT model M Pq 1 ( MP 0 = MU 0 ) to generate { Y subM } = { M Pq 1 ( X sub ) } and { X allM } = { M Pq 1 ( Y all ) } 5: Train a new PNMT model M Pq on synthetic parallel corpora { X sub , Y subM } and { Y all , X allM } 6: end while Output: The final translation model M Pk 2 ral machine translation (PNMT) model 2 M Pq 1 is used to generate { Y subM } = { M Pq 1 ( X sub ) } and { X allM } = { M Pq 1 ( Y all ) } to create synthetic parallel data { X sub , Y subM } and { Y all , X allM } .",
"Note that we use the UNMT model to generate synthetic parallel data during the first epoch of the ST-PT strategy.",
"Synthetic parallel data { X sub , Y subM } and { Y sub , X subM } are selected to train a new PNMT model M Pq that can generate translation in both directions.",
"Therefore, the translation probability for ST-PT strategy is optimized by maximizing L bt = EX P ( X ) EY PMP q 1 ( Y | X ) logP M Pq ( X | Y ) + EX P ( X ) EY PMP q 1 ( Y | X ) logP M Pq ( Y | X ) + EY P ( Y ) EX PMP q 1 ( X | Y ) logP M Pq ( Y | X ) + EY P ( Y ) EX PMP q 1 ( X | Y ) logP M Pq ( X | Y ) , (3) where PM Pq ( Y | X ) and PM Pq ( X | Y ) are the conditional distributions generated by the PNMT model on epoch q for the ST-PT strategy; PMP q 1 ( Y | X ) and PMP q 1 ( X | Y ) are the conditional distributions generated by the PNMT model on epoch q 1 for the ST-PT strategy.",
"We considered three language pairs in our simulation experiments: FrEn, Romanian (Ro)En and"
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain"
] |
[
"Human conversations naturally evolve around related concepts and scatter to multi-hop concepts.",
"This paper presents a new conversation generation model, ConceptFlow, which leverages commonsense knowledge graphs to explicitly model conversation flows.",
"By grounding conversations to the concept space, ConceptFlow represents the potential conversation flow as traverses in the concept space along commonsense relations.",
"The traverse is guided by graph attentions in the concept graph, moving towards more meaningful directions in the concept space, in order to generate more semantic and informative responses.",
"Experiments on Reddit conversations demonstrate ConceptFlow's effectiveness over previous knowledge-aware conversation models and GPT-2 based models while using 70% fewer parameters, confirming the advantage of explicit modeling conversation structures.",
"All source codes of this work are available at https://github.com/ thunlp/ConceptFlow .",
"The rapid advancements of language modeling and natural language generation (NLG) techniques have enabled fully data-driven conversation models, which directly generate natural language responses for conversations (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016b).",
"However, it is a common problem that the generation models may degenerate dull and repetitive contents (Holtzman et al., 2019; Welleck et al., 2019), which, in conversation assistants, leads to off-topic and useless responses.",
"(Tang et al., 2019; Zhang et al., 2018; Gao et al., 2019).",
"Conversations often develop around Knowledge.",
"A promising way to address the degeneration probIndicates equal contribution.",
"lem is to ground conversations with external knowledge (Xing et al., 2017), such as open-domain knowledge graph (Ghazvininejad et al., 2018), commonsense knowledge base (Zhou et al., 2018a), or background documents (Zhou et al., 2018b).",
"Recent research leverages such external knowledge by using them to ground conversations, integrating them as additional representations, and then generating responses conditioned on both the texts and the grounded semantics (Ghazvininejad et al., 2018; Zhou et al., 2018a,b).",
"Integrating external knowledge as extra semantic representations and additional inputs to the conversation model effectively improves the quality of generated responses (Ghazvininejad et al., 2018; Logan et al., 2019; Zhou et al., 2018a).",
"Nevertheless, some research on discourse development suggests that human conversations are not still: People chat around a number of related concepts, and shift their focus from one concept to others.",
"Grosz and Sidner (1986) models such concept shift by breaking discourse into several segments, and demonstrating different concepts, such as objects and properties, are needed to interpret different discourse segments.",
"Attentional state is then introduced to represent the concept shift corresponding to each discourse segment.",
"Fang et al. (2018) shows that people may switch dialog topics entirely in a conversation.",
"Restricting the utilization of knowledge only to those directly appear in the conversation, effective as they are, does not reach the full potential of knowledge in modeling human conversations.",
"To model the concept shift in human conversations, this work presents ConceptFlow ( Con versation generation with Con cept Flow ), which leverages commonsense knowledge graphs to model the conversation flow in the explicit concept space.",
"For example, as shown in Figure 1, the concepts of a conversation from Reddit evolves from chat and future, to adjacent concept talk, and also hops to distant concept dream along the commonsense relationsa typical involvement in natural conversations.",
"To better capture this conversation structure, ConceptFlow explicitly models the conversations as traverses in commonsense knowledge graphs: it starts from the grounded concepts, e.g., chat and future, and generates more meaningful conversations by hopping along the commonsense relations to related concepts, e.g., talk and dream.",
"The traverses in the concept graph are guided by graph attention mechanisms, which derives from graph neural networks to attend on more appropriate concepts.",
"ConceptFlow learns to model the conversation development along more meaningful relations in the commonsense knowledge graph.",
"As a result, the model is able to grow the grounded concepts by hopping from the conversation utterances, along the commonsense relations, to distant but meaningful concepts; this guides the model to generate more informative and on-topic responses.",
"Modeling commonsense knowledge as concept flows, is both a good practice on improving response diversity by scattering current conversation focuses to other concepts (Chen et al., 2017), and an implementation solution of the attentional state mentioned above (Grosz and Sidner, 1986).",
"Our experiments on a Reddit conversation dataset with a commonsense knowledge graph, ConceptNet (Speer et al., 2017), demonstrate the effectiveness of ConceptFlow.",
"In both automatic and human evaluations, ConceptFlow significantly outperforms various seq2seq based generation models (Sutskever et al., 2014), as well as previous methods that also leverage commonsense knowledge graphs, but as static memories (Zhou et al., 2018a; Ghazvininejad et al., 2018; Zhu et al., 2017).",
"Notably, ConceptFlow also outperforms two fine-tuned GPT-2 systems (Radford et al., 2019), while using 70% fewer parameters.",
"Explicitly modeling conversation structure provides better parameter efficiency.",
"We also provide extensive analyses and case studies to investigate the advantage of modeling conversation flow in the concept space.",
"Our analyses show that many Reddit conversations are naturally aligned with the paths in the commonsense knowledge graph; incorporating distant concepts significantly improves the quality of generated responses with more on-topic semantic information added.",
"Our analyses further confirm the effectiveness of our graph attention mechanism in selecting useful concepts, and ConceptFlow's ability in leveraging them to generate more relevant, informative, and less repetitive responses.",
"Sequence-to-sequence models, e.g., Sutskever et al. (2014), have been widely used for natural language generation (NLG), and to build conversation systems (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016b; Wu et al., 2019).",
"Recently, pre-trained language models, such as ELMO (Devlin et al., 2019), UniLM (Dong et al., 2019) and GPT-2 (Radford et al., 2018), further boost the NLG performance with large scale pretraining.",
"Nevertheless, the degenerating of irrelevant, off-topic, and non-useful responses is still one of the main challenges in conversational generation (Rosset et al., 2020; Tang et al., 2019; Zhang et al., 2018; Gao et al., 2019).",
"Recent work focuses on improving conversation generation with external knowledge, for example, incorporating additional texts (Ghazvininejad et al., 2018; Vougiouklis et al., 2016; Xu et al., 2017; Long et al., 2017), or knowledge graphs (Long et al., 2017; Ghazvininejad et al., 2018).",
"They have shown external knowledge effectively improves conversation response generation.",
"The structured knowledge graphs include rich semantics represented via entities and relations (Hayashi et al., 2019).",
"Lots of previous studies focus on task-targeted dialog systems based on domain-specific knowledge bases (Xu et al., 2017; Zhu et al., 2017; Gu et al., 2016).",
"To generate responses with a large-scale knowledge base, Zhou et al. (2018a) and Liu et al. (2018) utilize graph attention and knowledge diffusion to select knowledge semantics for utterance understanding and response generation.",
"Moon et al. (2019) focuses on the task of entity selection, and takes advantage of positive entities that appear in the golden response.",
"Different from previous research, ConceptFlow models the conversation flow explicitly with the commonsense knowledge graph and presents a novel attention mechanism on all concepts to guide the conversation flow in the latent concept space.",
"This section presents our Con versation generation model with latent Con cept Flow (ConceptFlow).",
"Our model grounds the conversation in the concept graph and traverses to distant concepts along commonsense relations to generate responses.",
"Given a user utterance X = { x 1 , ..., x m } with m words, conversation generation models often use an encoder-decoder architecture to generate a response Y = { y 1 , ..., y n } .",
"The encoder represents the user utterance X as a representation set H = { (cid:126)h 1 , ...,(cid:126)h m } .",
"This is often done by Gated Recurrent Units (GRU): (cid:126)h i = GRU ( (cid:126)h i 1 ,(cid:126)x i ) , (1) where the (cid:126)x i is the embedding of word x i .",
"The decoder generates t -th word in the response according to the previous t 1 generated words y <t = { y 1 , ..., y t 1 } and the user utterance X : P ( Y | X ) = n (cid:89) t =1 P ( y t | y <t , X ) .",
"(2) Then it minimizes the cross-entropy loss L and optimizes all parameters end-to-end: L = n (cid:88) t =1 CrossEntropy ( y t , y t ) , (3) where y t is the token from the golden response.",
"The architecture of ConceptFlow is shown in Figure 2.",
"ConceptFlow first constructs a concept graph G with central graph G central and outer graph G outer according to the distance (hops) from the grounded concepts (Sec. 3.2).",
"Then ConceptFlow encodes both central and outer concept flows in central graph G central and outer graph G outer , using graph neural networks and concept embedding (Sec. 3.3).",
"The decoder, presented in Section 3.4, leverages the encodings of concept flows and the utterance to generate words or concepts for responses.",
"ConceptFlow constructs a concept graph G as the knowledge for each conversation.",
"It starts from the grounded concepts (zero-hop concepts V 0 ), which appear in the conversation utterance and annotated by entity linking systems.",
"Then, ConceptFlow grows zero-hop concepts V 0 with one-hop concepts V 1 and two-hop concepts V 2 .",
"Concepts from V 0 and V 1 , as well as all relations between them, form the central concept graph G central , which is closely related to the current conversation topic.",
"Concepts in V 1 and V 2 and their connections form the outer graph G outer .",
"The constructed concept graph provides explicit semantics on how concepts related to commonsense knowledge.",
"ConceptFlow utilizes it to model the conversation and guide the response generation.",
"It starts from the user utterance, traversing through central graph G central , to outer graph G outer .",
"This is modeled by encoding the central and outer concept flows according to the user utterance.",
"Central Flow Encoding.",
"The central concept graph G central is encoded by a graph neural network that propagates information from user utterance H to the central concept graph.",
"Specifically, it encodes concept e i G central to representation (cid:126)g e i : (cid:126)g e i = GNN ( (cid:126)e i , G central , H ) , (4) where (cid:126)e i is the concept embedding of e i .",
"There is no restriction of which GNN model to use.",
"We choose Sun et al. (2018)'s GNN (GraftNet), which shows strong effectiveness in encoding knowledge graphs.",
"More details of GraftNet can be found in Appendix A.3.",
"Outer Flow Encoding.",
"The outer flow f e p , hopping from e p V 1 to its connected two-hop concept e k , is encoded to (cid:126)f e p by an attention mechanism: (cid:126)f e p = (cid:88) e k e k [ (cid:126)e p (cid:126)e k ] , (5) where (cid:126)e p and (cid:126)e k are embeddings for e p and e k , and are concatenated ( ).",
"The attention e k aggregates concept triple ( e p , r, e k ) to get (cid:126)f e p : e k = softmax (( w r (cid:126)r ) (cid:62) tanh( w h (cid:126)e p + w t (cid:126)e k )) , (6) where (cid:126)r is the relation embedding between the concept e p and its neighbor concept e k .",
"w r , w h and w t are trainable parameters.",
"It provides an efficient attention specifically focusing on the relations for multi-hop concepts.",
"To consider both user utterance and related information, the texts from the user utterance and the latent concept flows are incorporated by decoder using two components: 1) the context representation that combines their encodings (Sec. 3.4.1); 2) the conditioned generation of words and concepts from the context representations (Sec. 3.4.2).",
"time decoding with the encodings of the utterance and the latent concept flow.",
"Specifically, (cid:126)s t is calculated by updating the ( t 1 )-th step output representation (cid:126)s t 1 with the ( t 1 )-th step context representation (cid:126)c t 1 : (cid:126)s t = GRU ( (cid:126)s t 1 , [ (cid:126)c t 1 (cid:126)y t 1 ]) , (7) where (cid:126)y t 1 is the ( t 1 )-th step generated token y t 1 's embedding, and the context representation (cid:126)c t 1 concatenates the text-based representation (cid:126)c text t 1 and the concept-based representation (cid:126)c concept t 1 : (cid:126)c t 1 = FFN ([ (cid:126)c text t 1 (cid:126)c cpt t 1 ]) .",
"and attentions jt 1 on the utterance tokens:",
"(cid:126)c cpt t 1 = (cid:88) e i G central e i t 1 (cid:126)g e i (cid:88) f ep G outer ft 1 (cid:126)f e p .",
"(11)",
"The attention e i t 1 weights over central concept representations: e i t 1 = softmax ( (cid:126)s t 1 (cid:126)g e i ) , (12) and the attention ft 1 weights over outer flow representations: ft 1 = softmax ( (cid:126)s t 1 (cid:126)f e p ) .",
"(13) 3.4.2 Generating Tokens The t -th time output representation (cid:126)s t (Eq. 7) includes information from both the utterance text, the concepts with different hop steps, and the attentions upon them.",
"The decoder leverages (cid:126)s t to generate the t -th token to form more informative responses.",
"It first uses a gate to control the generation by choosing words ( = 0 ), central concepts ( V 0 , 1 , = 1 ) and outer concept set ( V 2 , = 2 ): = argmax { 0 , 1 , 2 } ( FFN ( (cid:126)s t )) , (14) The generation probabilities of word w , central concept e i , and outer concepts e k are calculated over the word vocabulary, central concept set V 0 , 1 , and outer concept set V 2 : y t softmax ( (cid:126)s t (cid:126)w ) , = 0 softmax ( (cid:126)s t (cid:126)g e i ) , = 1 softmax ( (cid:126)s t (cid:126)e k ) , = 2 , (15) where (cid:126)w is the word embedding for word w , (cid:126)g e i is the central concept representation for concept e i and (cid:126)e k is the two-hop concept e k 's embedding.",
"The training and prediction of ConceptFlow are conducted following standard conditional language models, i.e. using Eq.",
"15 in place of Eq.",
"2 and training it by the Cross-Entropy loss (Eq. 3).",
"Only ground truth responses are used in training and no additional annotation is required.",
"This section describes the dataset, evaluation metrics, baselines, and implementation details of our",
"experiments.",
"Dataset.",
"All experiments use the multi-hop extended conversation dataset based on a previous dataset which collects single-round dialogs from Reddit (Zhou et al., 2018a).",
"Our dataset contains 3,384,185 training pairs and 10,000 test pairs.",
"Preprocessed ConceptNet (Speer et al., 2017) is used as the knowledge graph, which contains 120,850 triples, 21,471 concepts and 44 relation types.",
"Evaluation Metrics.",
"A wide range of evaluation metrics are used to evaluate the quality of generated responses: PPL (Serban et al., 2016), Bleu (Papineni et al., 2002), Nist (Doddington, 2002), ROUGE (Lin, 2004) and Meteor (Lavie and Agarwal, 2007) are used for relevance and repetitiveness; Dist-1, Dist-2 and Ent-4 are used for diversity, which is same with the previous work (Li et al., 2016a; Zhang et al., 2018).",
"The metrics above are evaluated using the implementation from Galley et al. (2018).",
"Zhou et al. (2018a)'s concept PPL mainly focuses on concept grounded models and this metric is reported in Appendix A.1.",
"The Precision, Recall, and F1 scores are used to evaluate the quality of learned latent concept flow in predicting the golden concepts which appear in ground truth responses.",
"Baselines.",
"The six baselines compared come from three groups: standard Seq2Seq, knowledge-enhanced ones, and fine-tuned GPT-2 systems.",
"Seq2Seq (Sutskever et al., 2014) is the basic encoder-decoder for language generation.",
"Knowledge-enhanced baselines include MemNet (Ghazvininejad et al., 2018), CopyNet (Zhu et al., 2017) and CCM (Zhou et al., 2018a).",
"MemNet maintains a memory to store and read concepts.",
"CopyNet copies concepts for the response generation.",
"CCM (Zhou et al., 2018a) leverages a graph attention mechanism to model the central concepts.",
"These models mainly focus on the grounded concepts.",
"They do not explicitly model the conversation structures using multi-hop concepts.",
"GPT-2 (Radford et al., 2019), the pre-trained model that achieves the state-of-the-art in lots of language generation tasks, is also compared in our experiments.",
"We fine-tune the 124M GPT-2 in two ways: concatenate all conversations together and train it like a language model (GPT-2 lang ); extend the GPT-2 model with encode-decoder architecture and supervise with response data (GPT-2 conv ).",
"Implement Details.",
"The zero-hop concepts are initialized by matching the keywords in the post to concepts in ConceptNet, the same with CCM (Zhou et al., 2018a).",
"Then zero-hop concepts are extended to their neighbors to form the central concept graph.",
"The outer concepts contain a large amount of two-hop concepts with lots of noises.",
"To reduce the computational cost, we first train ConceptFlow (se-lect) with 10% random training data, and use the learned graph attention to select top 100 two-hop concepts over the whole dataset.",
"Then the standard train and test are conducted with the pruned graph.",
"More details of this filtering step can be found in Appendix A.4.",
"TransE (Bordes et al., 2013) embedding and Glove (Pennington et al., 2014) embedding are used to initialize the representation of concepts and words, respectively.",
"Adam optimizer with the learning rate of 0.0001 is used to train the model.",
"Five experiments are conducted to evaluate the generated responses from ConceptFlow and the effectiveness of the learned graph attention.",
"This experiment evaluates the generation quality of ConceptFlow automatically and manually.",
"Automatic Evaluation.",
"The quality of generated responses is evaluated with different metrics from three aspects: relevance, diversity, and novelty.",
"Table 1 and Table 2 show the results.",
"In Table 1, all evaluation metrics calculate the relevance between the generated response and the Model Bleu-4 Nist-4 Rouge-1 Rouge-2 Rouge-L Meteor PPL Seq2Seq 0.0098 1.1069 0.1441 0.0189 0.1146 0.0611 48.79 MemNet 0.0112 1.1977 0.1523 0.0215 0.1213 0.0632 47.38 CopyNet 0.0106 1.0788 0.1472 0.0211 0.1153 0.0610 43.28 CCM 0.0084 0.9095 0.1538 0.0211 0.1245 0.0630 42.91 GPT-2 (lang) 0.0162 1.0844 0.1321 0.0117 0.1046 0.0637 29.08 GPT-2 (conv) 0.0124 1.1763 0.1514 0.0222 0.1212 0.0629 24.55 ConceptFlow 0.0246 1.8329 0.2280 0.0469 0.1888 0.0942 29.90 Table 1: Relevance Between Generated and Golden Responses.",
"golden response.",
"ConceptFlow outperforms all baseline models by large margins.",
"The responses generated by ConceptFlow are more on-topic and match better with the ground truth responses.",
"In Table 2, Dist-1, Dist-2, and Ent-4 measure the word diversity of generated responses and the rest of metrics measure the novelty by comparing the generated response with the user utterance.",
"ConceptFlow has a good balance in generating novel and diverse responses.",
"GPT-2's responses are more diverse, perhaps due to its sampling mechanism during decoding, but are less novel and on-topic compared to those from ConceptFlow.",
"Human Evaluation.",
"The human evaluation focuses on two aspects: appropriateness and informativeness.",
"Both are important for conversation systems (Zhou et al., 2018a).",
"Appropriateness evaluates if the response is on-topic for the given utterance; informativeness evaluates systems' ability to provide new information instead of copying from the utterance (Zhou et al., 2018a).",
"All responses of sampled 100 cases are selected from four methods with better performances: CCM, GPT-2 (conv), ConceptFlow, and Golden Response.",
"The responses are scored from 1 to 4 by five judges (the higher the better).",
"Table 3 presents Average Score and Best@1 ratio from human judges.",
"The first is the mean of five judges; the latter calculates the fraction of judges that consider the corresponding response the best among four systems.",
"ConceptFlow outperforms all other models in all scenarios, while only using 30% of parameters compared to GPT-2.",
"This demonstrates the advantage of explicitly modeling conversation flow with structured semantics.",
"The agreement of human evaluation is tested to demonstrate the authenticity of evaluation results.",
"We first sample 100 cases randomly for our human evaluation.",
"Then the responses from four better conversation systems, CCM, GPT-2 (conv), ConceptFlow and Golden Responses, are provided with a random order.",
"A group of annotators are asked to score each response ranged from 1 to 4 according to the quality on two testing scenarios, appropriateness and informativeness.",
"All annotators have no clues about the source of generated responses.",
"The agreement of human evaluation for CCM, GPT-2 (conv) and ConceptFlow are presented in Table 4.",
"For each case, the response from ConceptFlow is compared to the responses from two baseline models, CCM and GPT-2 (conv).",
"The comparison result is divided into three categories: win, tie and loss.",
"Then the human evaluation agreement is calculated with Fleiss' Kappa ( ).",
"The value ranges from 0.21 to 0.40 indicating fair agreement, which confirms the quality of human evaluation.",
"Both automatic and human evaluations illustrate the effectiveness of ConceptFlow.",
"The next experiment further studies the effectiveness of multi-hop concepts in ConceptFlow.",
"This part explores the role of multi-hop concepts in ConceptFlow.",
"As shown in Figure 3, three experiments are conducted to evaluate the performances of concept selection and the quality of generated responses with different sets of concepts.",
"This experiment considers four variations of outer concept selections.",
"Base ignores two-hop concepts and only considers the central concepts.",
"Rand , Distract , and Full add two-hop concepts in three different ways: Rand selects concepts randomly, Distract selects all concepts that appear in the golden response with random negatives (dis-tractors), and Full is our ConceptFlow (select) that selects concepts by learned graph attentions.",
"As shown in Figure",
"3(a), Full covers more golden concepts than Base .",
"This aligns with our motivation that natural conversations do flow from central concepts to multi-hop ones.",
"Compared to Distract setting where all ground truth two-hop concepts are added, ConceptFlow (select) has slightly less coverage but significantly reduces the number of two-hop concepts.",
"The second experiment studies the model's ability to generate ground truth concepts, by comparing the concepts in generated responses with those in ground truth responses.",
"As shown in Figure",
"3(b), though Full filtered out some golden two-Depth Amount Golden Coverage Ratio Number Zero-hop 5.8 9.81% 0.579 + One-hop 98.6 38.78% 2.292 + Two-hop 880.8 61.37% 3.627 + Three-hop 3769.1 81.58% 4.821 ConceptFlow 198.6 52.10% 3.075 Table 5: Statistics of Concept Graphs with different hops, including the total Amount of connected concepts, the Ratio and Number of covered golden concepts (those appear in ground truth responses).",
"hop concepts, it outperforms other variations by large margins.",
"This shows ConceptFlow's graph attention mechanisms effectively leverage the pruned concept graph and generate high-quality concepts when decoding.",
"The high-quality latent concept flow leads to better modeling of conversations, as shown in Figure",
"3(c).",
"Full outperforms Distract in their generated responses' token level perplexity, even though Distract includes all ground truth two-hop concepts.",
"This shows that negatives selected by ConceptFlow, while not directly appear in the target response, are also on-topic and include meaningful information, as they are selected by graph attentions instead of random.",
"More studies of multi-hop concept selection strategies can be found in Appendix A.2.",
"As shown in Table 5, the Number of covered golden concepts increases with more hops.",
"Compared to zero-hop concepts, multi-hop concepts cover more golden concepts, confirming that conversations naturally shift to multi-hop concepts: extending the concept graph from one-hop to two-hop improves the recall from 39% to 61%, and to three-hop further improves to 81%.",
"However, at the same time, the amounts of the concepts also increase dramatically with multiple hops.",
"Three hops lead to 3,769 concepts on average, which are 10% of the entire graph we used.",
"In this work, we choose two-hop, as a good balance of coverage and efficiency, and used ConceptFlow (select) to filter around 200 concepts to construct the pruned graph.",
"How to efficiently and effectively leverage more distant concepts in the graph is reserved for future work.",
"Some cases from three conversation models are listed in Table 6.",
"Responses from CCM may repeat the same contents as it does not explicitly model the traverse in the concept space.",
"For example, the responses from the first and third cases always repeat I'm not sure.",
"On the other hand, GPT-2 generates more fluent responses compared to CCM.",
"Nevertheless, some cases from GPT-2 merely copy contents or concepts from the given post.",
"For example, for the third case, GPT-2 (conv) mainly discusses the concept music.",
"In comparison, the generated responses from our ConceptFlow are more fluent and informative than those from both CCM and GPT-2.",
"For example, in the third case, ConceptFlow brings associated concepts sound and check to the response generation, hopping from the grounded concepts mu-sic and advice.",
"Introducing these multi-hop concepts effectively improves the informativeness and diversity of generated responses.",
"Figure 4 presents a case study of ConceptFlow.",
"The attention score e i and f are presented in the form of color intensity.",
"The championship of zero-hop, fan of one-hop and team of two-hop receive more attention than others and are used to #1 Post actually i stayed at the building right next to the lighthouse .",
"generate the response.",
"The concept flow from fans to fan models the concept shift from user post to response.",
"The concept flow from fan to team further describes the concept shift in response generation.",
"In addition, some concepts, such as win and pretty, share higher attention and may help to understand the one-hop concepts, and are filtered out when generating response by the gate according to the relevance with conversation topic.",
"This experiment studies the learned attention of ConceptFlow on different groups of concepts.",
"We consider the average attention score ( for central concepts and (Appendix A.4) for two-hop concepts) from all decoding steps.",
"The probability density of the attention is plotted in Figure 5.",
"Figure",
"5(a) shows the attention weights on central concepts.",
"ConceptFlow effectively attends more on golden and zero-hop concepts, which include more useful information.",
"The attention on two-hop concepts are shown in Figure",
"5(b).",
"ConceptFlow attends slightly more on the Golden two-hop concepts than the rest two-hop ones, though the margin is smallerthe two-hop concepts are already filtered down to high-quality ones in the ConceptFlow (select) step.",
"ConceptFlow models conversation structure explicitly as transitions in the latent concept space, in order to generate more informative and meaningful responses.",
"Our experiments on Reddit conversations illustrate the advantages of ConceptFlow over previous conversational systems.",
"Our studies confirm that ConceptFlow's advantages come from the high coverage latent concept flow, as well as its graph attention mechanism that effectively guides the flow to highly related concepts.",
"Our human evaluation demonstrates that ConceptFlow generates more appropriate and informative responses while using much fewer parameters.",
"In future, we plan to explore how to combine knowledge with pre-trained language models, e.g. GPT-2, and how to effectively and efficiently introduce more concepts in generation models.",
"Houyu Zhang, Zhenghao Liu and Zhiyuan Liu is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61772302, 61532010).",
"We thank Hongyan Wang, Shuo Wang, Kaitao Zhang, Si Sun, Huimin Chen, Xuancheng Huang, Zeyun Zhang, Zhenghao Liu and Houyu Zhang for human evaluations."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online.",
"Aiming to tackle this problem, the natural language processing (NLP) community has experimented with a range of techniques for abuse detection.",
"While achieving substantial success, these methods have so far only focused on modelling the linguistic properties of the comments and the online communities of users, disregarding the emotional state of the users and how this might affect their language.",
"The latter is, however, inextricably linked to abusive behaviour.",
"In this paper, we present the first joint model of emotion and abusive language detection, experimenting in a multi-task learning framework that allows one task to inform the other.",
"Our results demonstrate that incorporating affective features leads to significant improvements in abuse detection performance across datasets.",
"Aggressive and abusive behaviour online can lead to severe psychological consequences for its victims (Munro, 2011).",
"This stresses the need for automated techniques for abusive language detection, a problem that has recently gained a great deal of interest in the natural language processing community.",
"The term abuse refers collectively to all forms of expression that vilify or offend an individual or a group, including racism, sexism, personal attacks, harassment, cyber-bullying , and many others.",
"Much of the recent research has focused on detecting explicit abuse, that comes in the form of expletives, derogatory words or threats, with substantial success (Mishra et al., 2019b).",
"However, abuse can also be expressed in more implicit and subtle ways, for instance, through the use of ambiguous terms and figurative language, which has proved more challenging to identify.",
"The NLP community has experimented with a range of techniques for abuse detection, such as recurrent and convolutional neural networks (Pavlopoulos et al., 2017; Park and Fung, 2017; Wang, 2018), character-based models (Nobata et al., 2016) and graph-based learning methods (Mishra et al., 2018a; Aglionby et al., 2019; Mishra et al., 2019a), obtaining promising results.",
"However, all of the existing approaches have focused on modelling the linguistic properties of the comments or the meta-data about the users.",
"On the other hand, abusive language and behaviour are also inextricably linked to the emotional and psychological state of the speaker (Patrick, 1901), which is re-flected in the affective characteristics of their language (Mabry, 1974).",
"In this paper, we propose to model these two phenomena jointly and present the first abusive language detection method that incorporates affective features via a multitask learning (MTL) paradigm.",
"MTL (Caruana, 1997) allows two or more tasks to be learned jointly, thus sharing information and features between the tasks.",
"In this paper, our main focus is on abuse detection; hence we refer to it as the primary task , while the task that is used to provide additional knowledge emotion detection is referred to as the auxiliary task .",
"We propose an MTL framework where a single model can be trained to perform emotion detection and identify abuse at the same time.",
"We expect that affective features, which result from a joint learning setup through shared parameters, will encompass the emotional content of a comment that is likely to be predictive of potential abuse.",
"We propose and evaluate different MTL architectures.",
"We first experiment with hard parameter sharing, where the same encoder is shared between the tasks.",
"We then introduce two variants of the MTL model to relax the hard sharing constraint and further facilitate positive transfer.",
"Our results demonstrate that the MTL models significantly outperform single-task learning (STL) in two different abuse detection datasets.",
"This confirms our hypothesis of the importance of affective features for abuse detection.",
"Furthermore, we compare the performance of MTL to a transfer learning baseline and demonstrate that MTL provides significant improvements over transfer learning.",
"Techniques for abuse detection have gone through several stages of development, starting with extensive manual feature engineering and then turning to deep learning.",
"Early approaches experimented with lexicon-based features (Gitari et al., 2015), bag-of-words ( BOW ) or n-gram features (Sood et al., 2012; Dinakar et al., 2011), and user-specific features, such as age (Dadvar et al., 2013) and gender (Waseem and Hovy, 2016).",
"With the advent of deep learning, the trend shifted, with abundant work focusing on neural architectures for abuse detection.",
"In particular, the use of convolutional neural networks ( CNN s) for detecting abuse has shown promising results (Park and Fung, 2017; Wang, 2018).",
"This can be attributed to the fact that CNN s are well suited to extract local and position-invariant features (Yin et al., 2017).",
"Character-level features have also been shown to be beneficial in tackling the issue of Out-of-Vocabulary (OOV) words (Mishra et al., 2018b), since abusive comments tend to contain obfuscated words.",
"Recently, approaches to abuse detection have moved towards more complex models that utilize auxiliary knowledge in addition to the abuse-annotated data.",
"For instance, Mishra et al. (2018a, 2019a) used community-based author information as features in their classifiers with promising results.",
"Founta et al. (2019) used transfer learning to fine-tune features from the author metadata network to improve abuse detection.",
"MTL, introduced by Caruana (1997), has proven successful in many NLP problems, as illustrated in the MTL survey of Zhang and Yang (2017).",
"It is interesting to note that many of these problems are domain-independent tasks, such as part-of-speech tagging, chunking, named entity recognition, etc. (Collobert and Weston, 2008).",
"These tasks are not restricted to a particular dataset or domain, i.e., any text data can be annotated for the phenomena involved.",
"On the contrary, tasks such as abuse detection are domain-specific and restricted to a handful of datasets (typically focusing on online communication), therefore presenting a different challenge to MTL.",
"Much research on emotion detection cast the problem in a categorical framework, identifying specific classes of emotions and using e.g., Ek-man's model of six emotions (Ekman, 1992), namely anger, disgust, fear, happiness, sadness, surprise.",
"Other approaches adopt the Valence-Arousal-Dominance ( VAD ) model of emotion (Mehrabian, 1996), which represents polarity, degree of excitement, and degree of control, each taking a value from a range.",
"The community has experimented with a variety of computational techniques for emotion detection, including vector space modelling (Danisman and Alpkocak, 2008), machine learning classifiers (Perikos and Hatzilygeroudis, 2016) and deep learning methods (Zhang et al., 2018).",
"In their work, Zhang et al. (2018) take an MTL approach to emotion detection.",
"However, all the tasks they consider are emotion-related (annotated for either classification or emotion distribution prediction), and the results show improvements over single-task baselines.",
"Akhtar et al. (2018) use a multitask ensemble architecture to learn emotion, sentiment, and intensity prediction jointly and show that these tasks benefit each other, leading to improvements in performance.",
"To the best of our knowledge, there has not yet been an approach investigating emotion in the context of abuse detection.",
"The tasks in an MTL framework should be related in order to obtain positive transfer.",
"MTL models are sensitive to differences in the domain and distribution of data (Pan and Yang, 2009).",
"This affects the stability of training, which may deteriorate performance in comparison to an STL model (Zhang and Yang, 2017).",
"We experiment with abuse and emotion detection datasets 1 that are from the same data domain Twitter.",
"All of the datasets were subjected to the same pre-processing steps, namely lower-casing, mapping all mentions and URLs to a common token (i.e., MTN and URL ) and mapping hashtags to words.",
"1 We do not own any rights to the datasets (or the containing tweets).",
"In the event of one who wishes to attain any of the datasets, to avoid redistribution infringement, we request them to contact the authors/owners of the source of the datasets.",
"To ensure that the results are generalizable, we experiment with two different abuse detection datasets.",
"OffensEval 2019 ( OffensEval ) This dataset is from SemEval 2019 Task 6: OffensEval 2019 Identifying and Categorizing Offensive Language in Social Media (Zampieri et al., 2019a,b).",
"We focus on Subtask A, which involves offensive language identification.",
"It contains 13 , 240 annotated tweets, and each tweet is classified as to whether it is offensive ( 33% ) or not ( 67% ).",
"Those classified as offensive contain offensive language or targeted offense, which includes insults, threats, profane language and swear words.",
"The dataset was annotated using crowdsourcing, with gold labels assigned based on the agreement of three annotators.",
"Waseem and Hovy 2016 ( Waseem&Hovy ) This dataset was compiled by Waseem and Hovy (2016) by searching for commonly used slurs and expletives related to religious, sexual, gender and ethnic minorities.",
"The tweets were then annotated with one of three classes: racism , sexism or neither .",
"The annotations were subsequently checked through an expert review, which yielded an inter-annotator agreement of = 0 .",
"84 .",
"The dataset contains 16 , 907 TweetIDs and their corresponding annotation, out of which only 16 , 202 TweetIDs were retrieved due to users being reported or tweets having been taken down since it was first published in 2016.",
"The distribution of classes is: 1 , 939 ( 12% ) racism ; 3 , 148 ( 19 . 4% ) sexism ; and 11 , 115 ( 68 . 6% ) neither , which is comparable to the original distribution: ( 11 . 7% : 20 . 0% : 68 . 3% ).",
"It should be noted that racial or cultural biases may arise from annotating data using crowdsourcing, as pointed out by Sap et al. (2019).",
"The performance of the model depends on the data used for training, which in turn depends on the quality of the annotations and the experience level of the annotators.",
"However, the aim of our work is to investigate the relationship between emotion and abuse detection, which is likely to be independent of the biases that may exist in the annotations.",
"Emotion ( SemEval18 ) This dataset is from SemEval-2018 Task 1: Affect in Tweets (Moham-mad et al., 2018), and specifically from Subtask 5",
"which is a multilabel classification of 11 emotion labels that best represent the mental state of the author of a tweet.",
"The dataset consists of around 11 k tweets (training set: 6839 ; development set: 887 ; test set: 3260 ).",
"It contains the TweetID and 11 emotion labels ( anger , anticipation , disgust , fear , joy , love , optimism , pessimism , sadness , surprise , trust ) which take a binary value to indicate the presence or absence of the emotion.",
"The annotations were obtained for each tweet from at least 7 annotators and aggregated based on their agreement.",
"In this section, we describe our baseline models and then proceed by describing our proposed models for jointly learning to detect emotion and abuse.",
"As our baselines, we use different Single-Task Learning (STL) models that utilize abuse detection as the sole optimization objective.",
"The STL experiments are conducted for each primary-task dataset separately.",
"Each STL model takes as input a sequence of words { w 1 , w 2 , ..., w n } , which are initialized with k -dimensional vectors e from a pre-trained embedding space.",
"We experiment with two different architecture variants: Max Pooling and MLP classifier We refer to this baseline as STL maxpool + MLP .",
"In this baseline, a two-layered bidirectional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) is applied to the embedding representations e of words in a post to get contextualized word representations { h 1 , h 2 , ..., h n } : h t = [ h t ; h t ] (1) with h t , h t R l and h t R 2 l , where l is the hidden dimensionality of the BiLSTM.",
"We then apply a max pooling operation over { h 1 , h 2 , ..., h n } : r (p)i = max i ( h 1 , h 2 , ..., h n ) (2) where r (p) R 2 l and where the superscript (p) is used to indicate that the representations correspond to the primary task.",
"This is followed by dropout (Srivastava et al., 2014) for regularization and a 2 -layered Multi-layer Perceptron (MLP) (Hinton, 1987): m 1(p) = BatchNorm ( tanh ( W l 1 r (p) )) (3) m 2(p) = tanh ( W l 2 m 1(p) ) (4) m (p)t = m 2(p)t (5) where W l 1 and W l 2 are the weight matrices of the 2 -layer MLP.",
"Dropout is applied to the output m (p) of the MLP, which is then followed by a linear output layer to get the unnormalized output o (p) .",
"For OffensEval , a sigmoid activation is then applied in order to make a binary prediction with respect to whether a post is offensive or not, while the network parameters are optimized to minimize the binary cross-entropy (BCE): LBCE = 1 NN (cid:88) i =1 y i log ( p ( y i ))+ (1 y i ) log (1 p ( y i )) (6) where N is the number of training examples, and y denotes the true and p ( y ) the predicted label.",
"For Waseem&Hovy , a log softmax activation is applied for multiclass classification, while the network parameters are optimized to minimize the categorical cross-entropy, that is, the negative log-likelihood (NLL) of the true labels: LNLL = 1 NN (cid:88) i =1 log ( p ( y i )) (7) BiLSTM and Attention classifier We refer to this model as STL BiLSTM + attn .",
"In this baseline (Figure 1; enclosed in the dotted boxes), rather than applying max pooling, we apply dropout to h which is then followed by a third BiLSTM layer and an attention mechanism: u (p)t = W a r (p)t (8) a (p)t = exp ( u (p)t ) (cid:80) t exp ( u (p)t ) (9) m (p) = (cid:88) t a (p)t r (p)t (10) where r (p) is the output of the third BiLSTM.",
"We then apply dropout to the output of the attention layer m (p) .",
"The remaining components, output layer and activation, are the same as the STL maxpool + MLP model.",
"Across the two STL baselines, we further experiment with two different input representations: 1) GloVe (G), where the input is projected through the GloVe embedding layer (Pennington et al., 2014); 2) GloVe+ELMo (G+E), where the input is first projected through the GloVe embedding layer and the ELMo embedding layer (Peters et al., 2018) separately, and then the final word representation e is obtained by concatenating the output of these two layers.",
"Given these input representations, we have a total of 4 different baseline models for abuse detection.",
"We use grid search to tune the hyperparameters of the baselines on the development sets of the primary task (i.e., abuse detection).",
"Our MTL approach uses two different optimization objectives: one for abuse detection and another for emotion detection.",
"The two objectives are weighted by a hyperparameter [( 1 ) for abuse detection and for emotion detection] that controls the importance we place on each task.",
"We experiment with different STL architectures for the auxiliary task and propose MTL models that contain two network branches one for the primary task and one for the auxiliary task connected by a shared encoder which is updated by both tasks alternately.",
"Hard Sharing Model This model architecture, referred to as MTL Hard , is inspired by Caruana (1997) and uses hard parameter sharing : it consists of a single encoder that is shared and updated by both tasks, followed by task-specific branches.",
"Figure 1 presents MTL Hard where the dotted box represents the STL BiLSTM + attn architecture that is specific to the abuse detection task.",
"In the righthand side branch corresponding to the auxiliary objective of detecting emotion we apply dropout to h before passing it to a third BiLSTM.",
"This is then followed by an attention mechanism to obtain m",
"(a) and then dropout is applied to it.",
"The superscript",
"(a) is used to indicate that these representations correspond to the auxiliary task.",
"Then, we obtain the unnormalized output o",
"(a) after passing m",
"(a) through a linear output layer with o",
"(a) R 11 ( 11 different emotions in SemEval18 ), which is then subjected to a sigmoid activation to obtain a prediction p ( y ) .",
"While the primary task on the left is optimized using either Equation 6 or 7 (de-pending on the dataset used), the auxiliary task is optimized to minimize binary cross-entropy.",
"Double Encoder Model This model architecture, referred to as MTL DEncoder , is an extension of the previous model that now has two BiLSTM encoders: a task-specific two-layered BiLSTM encoder for the primary task, and a shared two-layered BiLSTM encoder.",
"During each training step of the primary task, the input representation e for the primary task is passed through both encoders, which results in two contextualized word representations { h (p)1 , h (p)2 , ..., h (p)n } and { h (s)1 , h (s)2 , ..., h (s)n } , where superscript (s) is used to denote the representations that result from the shared encoder.",
"These are then summed (Figure 2, where both (p) and (s) are fixed and set to 1 ) and the output representation is passed through a third BiLSTM followed by an attention mechanism to get the post representation m (p) .",
"The rest of the components of the primary task branch, as well as the auxiliary task branch are the same as those in MTL Hard .",
"Gated Double Encoder Model This model architecture, referred to as MTL GatedDEncoder , is an extension of MTL DEncoder , but is different in the way we obtain the post representations m (p) .",
"Representations h (p) and h (s) are now merged using two learnable parameters (p) and (s) (where (p) + (s) = 1 .",
"0 ) to control the flow of information from the representations that result from the two encoders (Figure 2): (p) h (p) + (s) h (s) (11) The remaining architecture components of the primary task and auxiliary task branch are the same as for MTL DEncoder .",
"Hyperparameters We use pre-trained GloVe embeddings 2 with dimensionality 300 and pre-trained ELMo embeddings 3 with dimensionality 1024 .",
"Grid search is performed to determine the optimal hyperparameters.",
"We find an optimal value of = 0 .",
"1 that makes the updates for the auxiliary task 10 times less important.",
"The encoders consist of 2 stacked BiLSTMs with hidden size = 512 .",
"For all primary task datasets, the BiLSTM+Attention classifier and the 2 -layered MLP classifier have hidden size = 256 .",
"For the auxiliary task datasets, the BiLSTM+Attention classifier and the 2-layered MLP classifier have hidden size = 512 .",
"Dropout is set to 0 .",
"2 .",
"We use the Adam optimizer (Kingma and Ba, 2014) for all experiments.",
"All model weights are initialized using Xavier Initialization (Glorot and Bengio, 2010).",
"For MTL GatedDEncoder , (p) = 0 .",
"9 and (s) = 0 .",
"1 .",
"2 https://nlp.stanford.edu/projects/glove/ 3 https://allennlp.org/elmo STL model P R F1 G maxpool+MLP 76.35 73.34 74.24 BiLSTM+attn 77.34 72.77 73.97 G+E maxpool + MLP 77.19 72.73 73.95 BiLSTM+attn 77.40 73.27 74.40",
"Training All models are trained until convergence for both the primary and the auxiliary task, and early stopping is applied based on the performance on the validation set.",
"For MTL, we ensure that both the primary and the auxiliary task have completed at least 5 epochs of training.",
"The MTL training process involves randomly (with p = 0 . 5 ) alternating between the abuse detection and emotion detection training steps.",
"Each task has its own loss function, and in each of the corresponding task's training step, the model is optimized accordingly.",
"All experiments are run using stratified 10 fold cross-validation, and we use the paired t-test for significance testing.",
"We evaluate the models using Precision ( P ), Recall ( R ), and F1 ( F 1 ), and report the average macro scores across the 10 folds.",
"The STL experiments are conducted on the abuse detection datasets independently.",
"As mentioned in the STL section, we experiment with four different model configurations to select the best STL baseline.",
"Table 1a presents the evaluation results of the STL models trained and tested on the OffensEval dataset, and Table 1b on the Waseem and Hovy dataset.",
"The best results are highlighted in bold and are in line with the validation set results.",
"We select the best performing STL model configuration on each dataset and use it as part of the corresponding MTL architecture in the MTL experiments below.",
"In this section, we examine the effectiveness of the MTL models for the abuse detection task and explore the impact of using emotion detection as an auxiliary task.",
"We also compare the performance of our MTL models with that of a transfer learning approach.",
"Emotion detection as an auxiliary task In this experiment, we test whether incorporating emotion detection as an auxiliary task improves the performance of abuse detection.",
"Tables 2a and 2b show the results on OffensEval and Waseem and Hovy datasets ( indicates statistically significant results over the corresponding STL model).",
"Learning emotion and abuse detection jointly proved beneficial, with MTL models achieving statistically significant improvement in F1 using the Gated Double Encoder Model MTL GatedDEncoder ( p < 0 . 05 , using a paired t-test).",
"This suggests that affective features from the shared encoder benefit the abuse detection task.",
"MTL vs. transfer learning Transfer learning is an alternative to MTL that also allows us to transfer knowledge from one task to another.",
"This experiment aims to compare the effectiveness of MTL against transfer learning.",
"We selected the MTL model with the best performance in abuse detection and compared it against an identical model, but trained in a transfer learning setting.",
"In this setup, we first train the model on the emotion detection task until convergence and then proceed by fine-tuning it for the abuse detection task.",
"Table 3 presents the comparison between MTL and transfer learning, for which we use the same architecture and hyperparameter configuration as MTL.",
"We observe that MTL outperforms transfer learning and provides statistically significant ( p < 0 . 05) results on both OffensEval and Waseem and Hovy datasets.",
"Auxiliary task Our results show that emotion detection significantly improves abuse detection on both OffensEval and Waseem and Hovy datasets.",
"Table 4 presents examples of improvements in both datasets achieved by the MTL GatedDEncoder model, over the STL model.",
"In the examples, the highlighted words are emotion evocative words, which are also found in the SemEval2018 Emotion dataset.",
"As the emotion detection task encourages the model to learn to predict the emotion labels for the examples that contain these words, the word representations and encoder weights that are learned by the model encompass some affective knowledge.",
"Ultimately, this allows the MTL model to determine the affective nature of the example, which may help it to classify abuse more accurately.",
"It is also interesting to observe that a controversial person or topic may strongly influence the classification of the sample containing it.",
"For example, sentences referring to certain politicians may be classified as Offensive , regardless of the context.",
"An example instance of this can be found in Table 4.",
"4 The MTL model, however, classifies it correctly, which may be attributed to the excessive use of ! marks.",
"The latter is one of the most frequently used symbols in the SemEval2018 Emotion dataset, and it can encompass many emotions such as surprise , fear , etc., therefore, not being indicative of a particular type of emotion.",
"Such knowledge can be learned within the shared features of the MTL model.",
"4 We mask the name using the POLITICIAN tag.",
"MTL vs. transfer learning This experiment demonstrates that MTL achieves higher performance than transfer learning in a similar experimental setting.",
"The higher performance may be indicative of a more stable way of transferring knowledge, which leads to better generalization.",
"In the MTL framework, since the shared parameters are updated alternately, each task learns some knowledge that may be mutually beneficial to both related tasks, which leads to a shared representation that encompasses the knowledge of both tasks and hence is more generalized.",
"In contrast, in the case of transfer learning, the primary task fine-tunes the knowledge from the auxiliary task (i.e., in the form of pre-trained parameters) for its task objective and may be forgetting auxiliary task knowledge.",
"In this paper, we proposed a new approach to abuse detection, which takes advantage of the affective features to gain auxiliary knowledge through an MTL framework.",
"Our experiments demonstrate that MTL with emotion detection is beneficial for the abuse detection task in the Twitter domain.",
"The mutually beneficial relationship that exists between these two tasks opens new research avenues for improvement of abuse detection systems in other domains as well, where emotion would equally play a role.",
"Overall, our results also suggest the superiority of MTL over STL for abuse detection.",
"With this new approach, one can build more complex models introducing new auxiliary tasks for abuse detection.",
"For instance, we expect that abuse detection may also benefit from joint learning with complex semantic tasks, such as figurative language processing and inference."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"result",
"objective",
"objective",
"abstain",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"other",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain"
] |
[
"Natural disasters (e.g., hurricanes) affect millions of people each year, causing widespread destruction in their wake.",
"People have recently taken to social media websites (e.g., Twitter) to share their sentiments and feelings with the larger community.",
"Consequently, these platforms have become instrumental in understanding and perceiving emotions at scale.",
"In this paper, we introduce HURRICANEEMO , an emotion dataset of 15,000 English tweets spanning three hurricanes: Harvey, Irma, and Maria.",
"We present a comprehensive study of fine-grained emotions and propose classification tasks to discriminate between coarse-grained emotion groups.",
"Our best BERT (De-vlin et al., 2019) model, even after task-guided pre-training which leverages unlabeled Twitter data, achieves only 68% accuracy (averaged across all groups).",
"HURRICANEEMO serves not only as a challenging benchmark for models but also as a valuable resource for analyzing emotions in disaster-centric domains.",
"Natural disasters cause thousands of deaths and displace hundreds of millions each year (Ritchie and Roser, 2020).",
"These catastrophic events not only induce material destruction but also stir an integral part of being human: our emotions.",
"Disasters adversely affect individuals' mental states (Fritz and Marks, 1954; Kinston and Rosser, 1974), and therefore it is no surprise that many take to social media (e.g., Twitter) to share their feelings.",
"Social media websites, as a result, have become an essential platform for understanding the expression and perception of emotions at a significantly larger scale (Mohammad, 2012; Wang et al., 2012; Mohammad and Kiritchenko, 2015; Volkova and Bachrach, 2016; Abdul-Mageed and Ungar, 2017), with far reaching potential influences from academic research to public policy (Dennis et al., 2006; Fritze et al., 2008; Fraustino et al., 2012).",
"While natural language processing methods have been effective for emotion detection (Strapparava and Mihalcea, 2007), existing resources struggle in disaster-centric domains, in part due to distributional shifts.",
"Emotion detection in natural disasters (e.g., hurricanes) requires implicit reasoning not available as surface-level lexical information.",
"For example, in of course, [we] 1 still have the [storm surge] 2 coming, given the context, we can reasonably infer discontent towards the storm surge despite the absence of polarizing words.",
"Therefore, distantly supervised techniques largely based on lexical units (Mohammad and Turney, 2013; Abdul-Mageed and Ungar, 2017) fail to capture this type of deeper semantic phenomena.",
"Our paper presents a comprehensive investigation into perceived emotions in hurricane disasters.",
"To this end, we introduce HURRICANEEMO , a dataset of 15,000 disaster-related tweets (in English) streamed during Hurricanes Harvey, Irma, and Maria, which were devastating tropical storms occurring in the 2017 Atlantic hurricane season (Belles, 2017).",
"Our samples are annotated with fine-grained emotions derived from the Plutchik Wheel of Emotions (Plutchik, 2001), a well-defined ontology of emotion classes commonly used in computational social science (Abdul-Mageed and Ungar, 2017).",
"1 To measure inter-annotator agreement on fine-grained emotion labels, we conceptualize the P lutchik E motion A greement (PEA) metric (3).",
"PEA is intuitively grounded; our human evaluation shows workers agree with PEA's rankings 88% of the time.",
"Furthermore, we perform insightful analyses on implicit and explicit emotions in hurricane tweets (4).",
"Quite surpris-1 Specifically, we use Plutchik-8 and Plutchik-24 emotions.",
"We refer readers to Plutchik (2001) for an in-depth discussion on their conception.",
"ingly, we find consistencies in Plutchik-24 emotion distributions across Hurricanes Harvey, Irma, and Maria.",
"HURRICANEEMO also serves as a challenging new benchmark for large-scale, pre-trained language models.",
"We establish baselines for a coarser Plutchik-8 emotion detection task using BERT (De-vlin et al., 2019) and RoBERTa (Liu et al., 2019) (5).",
"Our experiments reveal: (1) BERT only achieves 64% (averaged) accuracy; and (2) using better pre-trained models (e.g., RoBERTa) does not help, which is a strikingly different trend than most leaderboards (Wang et al., 2018).",
"To better understand their pitfalls, in particular BERT, we conduct a comprehensive error analysis of 200 incorrectly predicted samples.",
"In addition, we incorporate stronger inductive biases into BERT via pretraining on related tasks, which culminates in (av-eraged, absolute) +4% accuracy (6).",
"Finally, we propose unsupervised domain adaptation to bridge the domain gap between existing large-scale emotion datasets (e.g., EMONET (Abdul-Mageed and Ungar, 2017)) and HURRICANEEMO (7).",
"Our code and datasets are made publicly available.",
"2 2 Related Work Emotion detection has been extensively studied in news headlines (Strapparava and Mihalcea, 2007; Katz et al., 2007), blog posts (Aman and Szpakow-icz, 2007), health-related posts (Khanpour and Caragea, 2018), and song lyrics (Strapparava et al., 2012), but only recently, in social media websites (e.g., Twitter, Facebook) (Mohammad, 2012; Wang et al., 2012; Mohammad and Kiritchenko, 2015; Volkova and Bachrach, 2016; Abdul-Mageed and Ungar, 2017).",
"However, emotion detection in disaster-centric domains, despite its practical importance, is limited.",
"Schulz et al. (2013) (single-handedly) annotate 2,200 Hurricane Sandy tweets using Ekman-6 emotions (Ekman, 1992).",
"In contrast, we introduce 15,000 annotated tweets from multiple hurricanes with (much more fine-grained) Plutchik-24 emotions.",
"Unlike Abdul-Mageed and Ungar (2017), we focus on readers' perceived emotions rather than writers' intended emotions.",
"Furthermore, in disaster-centric domains, the lack of labeled data required to train reliable models precludes the use of supervised learning techniques.",
"Several works propose to use labeled data 2 https://github.com/shreydesai/ hurricane from prior (source) disasters to learn classifiers for new (target) disasters (Verma et al., 2011; Nguyen et al., 2017; Imran et al., 2013, 2016; Caragea et al., 2016).",
"However, due to the unique nature of each disaster (e.g., type, geographical location, season, cultural differences among the affected pop-ulation), the source disaster may not accurately re-flect the characteristics of the target disaster (Palen and Anderson, 2016; Imran et al., 2015).",
"Domain adaptation techniques address these challenges by efficiently using large amounts of unlabeled target domain data, consequently outperforming the aforementioned supervised techniques (Alam et al., 2018; Li et al., 2017).",
"Our work contributes to disaster-centric emotion detection in three ways by: (1) introducing a dataset large enough to train supervised classifiers; (2) exploring various forms of pre-training to instill strong inductive biases; and (3) establishing domain adaptation baselines by leveraging emotive samples obtainable via distant supervision.",
"In this section, we present HURRICANEEMO , an annotated dataset of 15,000 English tweets from Hurricanes Harvey, Irma, and Maria.",
"We detail each component, including the initial preprocessing (3.1), annotation procedures (3.2), and the formulation and calculation of inter-annotator agreement (3.3).",
"Ray Chowdhury et al. (2019) release a repository of large-scale Twitter datasets consisting of tweets streamed during the Harvey, Irma, and Maria hurricanes, which we will refer to as HURRICANEEXT (i.e., extended).",
"We use their tweets as a starting point for the construction of our dataset.",
"We perform two types of preprocessing.",
"First, we replace usernames and links with <USER> and <URL> , respectively, then eliminate duplicate tweets.",
"Second, we use filtering techniques to ensure the resulting tweets contain emotive content.",
"We assume a lexical prior over emotion tweets, that is, requiring that an emotive tweet consist of at least one word derived from EMOLEX (Mo-hammad and Turney, 2013).",
"EMOLEX consists of 14,182 crowdsourced words associated with several emotion categories.",
"Critically, these words appear in emotional contexts, but are not necessarily emotion words themselves.",
"For example, payback is related to the emotion anger, but is also used extensively in finance.",
"Significant past work (Bravo-Marquez et al., 2014; Majumder et al., 2017; Giat-soglou et al., 2017) has used this lexicon to bootstrap their emotion datasets, since the alternatives are (1) using unlabeled tweets as-is or (2) using a model to classify emotional tweets.",
"Initially, we started with (1) and did no emotion-related preprocessing.",
"However, the dataset contained many spurious tweets, such as snippets of news articles, that had little to do with emotions.",
"The level of noise rendered the data prohibitively costly to annotate.",
"For (2), there is simply no such large-scale data to train on, and existing resources like EMONET manifest an even stronger prior where tweets are only included if they explicitly contain an emotion hashtag (e.g., #sad , #angry , #happy ).",
"We randomly sample 5,000 tweets each for annotation from the filtered datasets for Harvey, Irma, and Maria; in total, this yields 15,000 annotations.",
"We request workers on Amazon Mechanical Turk to label tweets with a list of Plutchik-24 emotions.",
"Furthermore, to enable fine-grained emotion analysis, we do not crowdsource Plutchik-8 emotions directly.",
"We require that workers reside in the US and have completed 500+ HITs with an acceptance rate 95%.",
"Each HIT is completed by 5 workers.",
"In this section, we elaborate on our PEA metric for computing inter-annotator agreement with fine-grained emotion labels.",
"Challenges.",
"Fine-grained emotion annotation presents several challenges for evaluating inter-annotator agreement.",
"First, because a tweet can convey multiple emotions, we allow workers to select more than one Plutchik-24 emotion.",
"This implies an agreement metric must support scoring sets of categorical values.",
"Passonneau (2004) use set distance metrics for capturing agreement between coreference cluster annotations.",
"Similarly, Wood et al. (2018) incorporate Jaccard's similarity in Krippendorff's alpha.",
"However, these methods would penalize fine-grained emotions equally, which is not ideal.",
"For the Plutchik wheel, the proximity of any two emotions represents their relatedness.",
"For example, TRUST and ADMIRATION belong to the same emotion group while LOATHING and ADMIRATION are orthogonal to each other.",
"PEA Scores.",
"We introduce the P lutchik E motion A greementhereafter referred to as PEAto address these challenges.",
"We superimpose a unit circle onto the Plutchik wheel, representing each Plutchik-8 emotion as a polar coordinate (e.g., DISAPPROVAL = ( 22 , 2 2 ) ).",
"Intuitively, the an-gles between Plutchik-8 emotions represent how similar or dissimilar they are.",
"If two Plutchik-24 annotations belong to the same Plutchik-8 group, we do not penalize them (e.g., JOY and ECSTASY incur no penalty).",
"Otherwise, we enforce a linear penalty based on how radially separate the annotations are (e.g., ECSTASY and GRIEF incur the highest penalty).",
"Higher PEA scores imply more agreement.",
"Example.",
"Figure 1 visualizes our metric.",
"In this example, two annotators select emotions with radians 3 2 and 4 , respectively.",
"The | f ( e ( i ) x ) f ( e ( j ) y ) | term evaluates to 5 4 .",
"Then, it is normalized using 1 , yielding 54 = 1 .",
"25 .",
"Finally, we subtract to obtain the agreement score: | 1 1 .",
"25 | = 0 .",
"25 .",
"Intuitively, this makes sense as the decisions are only slightly better than being in complete disagreement (i.e., orthogonal).",
"Formulation.",
"For clarity, we introduce notation.",
"Let w x and w y denote workers with (categorical) annotation sets { e ( i ) x } ni =1 and { e ( j ) y } mj =1 , respectively.",
"The pairwise agreement d ( w x , w y ) between the workers is computed as: 1 n n (cid:88) i =1 max j (cid:0) | 1 1 | f ( e ( i ) x ) f ( e ( j ) y ) || (cid:1) Vocabulary Features (%) Hurricane Orig.",
"where 1 is a normalizing constant and f : R is a map from Plutchik-8 emotions to radians.",
"Given a collection of workers that annotated a tweet, we obtain per-worker PEA scores by averaging over all possible pairwise agreements.",
"For example, if workers w 1 3 annotated the same tweet, PEA( w 1 ) = 12 ( d ( w 1 , w 2 ) + d ( w 1 , w 3 )) .",
"For quality control, we filter annotations from workers with PEA 0.55.",
"This threshold is determined through manual inspection of 50 workers and their annotations.",
"The (averaged, per-worker) PEA scores for each hurricane are: Harvey (65.7), Maria (67.3), and Irma (70.3).",
"3 Human Evaluation.",
"We perform a human evaluation with our proposed metric, which is absent in previous work for measuring inter-annotator agreement for emotion annotations (Wood et al., 2018; hman et al., 2018).",
"Crowdsourced workers are asked to determine the agreement between two annotation pairs constructed from three annotators, that is, A: ( e 1 , e 2 ) and B: ( e 1 , e 3 ) .",
"They choose between three options: (1) A has higher agreement than B; (2) A and B have (roughly) the same agreement; and (3) B has higher agreement than A. 88.2% of the worker rankings match with PEA's rankings, pointing towards strong human agreement.",
"The workers themselves in this study also show good agreement according to Krippendorff's alpha ( = 74.0) (Artstein and Poesio, 2008).",
"4 4 Qualitative Analysis 4.1 Dataset Overview Table 1 presents several statistics of HURRICANEEMO .",
"We make three observations.",
"First, the 3 A reasonable interpretation of PEA scores may be as follows: 025 (no agreement), 2550 (poor agreement), 5075 (moderate agreement), 75100 (high agreement).",
"4 See Appendix B for details on our procedures.",
"vocabularies across all datasets are large considering there are only 5,000 tweets per hurricane.",
"The vocabularies do decrease by about 30% after preprocessing, although the resulting sizes still suggest users use a myriad of words to express their emotions.",
"Second, only about 50% of Harvey tweets and 40% of Irma/Maria tweets contain hashtags.",
"Hashtags are a unique marker of Twitter discourse (Ritter et al., 2011), but in our dataset specifically, hashtags are used to tag particular entities, spread disaster-relief awareness, and create trending content.",
"This phenomena alone makes our tweets different from those collected through distant supervision (Abdul-Mageed and Ungar, 2017).",
"Third, roughly 80-85% of tweets contain links to third-party content.",
"Users commonly use links to share news articles, resources for humanitarian aid, and other miscellaneous multimedia.",
"Table 2 shows three samples from HURRICANEEMO .",
"Unlike EMONET (Abdul-Mageed and Ungar, 2017), our dataset does not have the strong assumption that only one emotion can be expressed in a tweet.",
"For example, the first tweet lexically points towards the expression of more than one emotion.",
"The predicate helped us implies the user admires Mexico for providing aid, and the exclamation mark is indicative of JOY .",
"In addition, our samples contain a mix of implicit and explicit emotions, which lexical information alone cannot resolve.",
"In the third tweet, there are no particular words that point towards ANGER and ANNOYANCE , but we can infer the user is upset that the media is not prioritizing Hurricane Maria.",
"Finally, our emotion prediction tasks cannot be solved by simply retrofitting pre-trained word embeddings (Mikolov et al., 2013; Pennington et al., 2014) or contextualized representations (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019), which we also empirically show in our experiments (5).",
"These methods work best for explicit emotion detection as they largely overfit to sparse lex-Plutchik-8 Plutchik-24 Emotion Abbrv.",
"ical features.",
"Rather, in order to capture implicit emotions, models must carry an inductive bias that appropriately reasons over the context (e.g., what event(s)",
"occurred?) and semantic roles (e.g., what happened to whom?) while balancing the aforementioned features.",
"We begin to analyze the fine-grained emotions present in our datasets.",
"We ask the following questions: What is the general distribution of emotions?",
"Are certain emotion groups highlighted more than others?",
"How does the distribution change across hurricanes?",
"Figure 2 shows Plutchik-24 emotion distributions for Hurricanes Harvey, Irma, and Maria.",
"From these plots, a couple of trends emerge.",
"First, the Plutchik-24 emotion counts are within the ballpark of each other with the notable exceptions of ADMIRATION and FEAR .",
"This suggests that, on average, hurricane disasters evoke a similar spread of implicit and explicit emotions among most emotion categories.",
"Second, users tend to post more optimistic content during hurricane disasters.",
"We hy-Figure 2: Per-hurricane emotion counts where each box's Plutchik-8 emotion is broken down into its respective Plutchik-24 emotions.",
"Plutchik-24 emotions are abbreviated using the codes in Table 3.",
"pothesize that users use Twitter as a social platform to spread awareness of the hurricanes themselves or post-disaster relief efforts, commonly using hashtags like #prayfortexas , #floridaevacuation , and #donationdrive .",
"It is encouraging to see that although users do express natural emotions such as fear, sadness, and anger, many seek to help others in the face of adversity.",
"Third, sharp changes in emotion counts between Harvey and Irma may be tied to their history.",
"In the 2017 Atlantic hurricane season, Harvey materialized as a Cat-4 hurricane, and Irma followed around two weeks later as a Cat-5 hurricane.",
"5 Through side-by-side comparisons of both hurricanes' tweets, we found the Irma tweets had more descriptions of destruction and its aftermath.",
"These changes in discourse potentially explain shifts between the emotion distributions.",
"Thus far, we have analyzed each Plutchik-24 emotion in isolation.",
"In this section, we ask the following questions: How do Plutchik-8 emotion groups co-occur with one another?",
"Do co-occurrence patterns change across hurricanes?",
"Figure 3 shows co-occurrence heatmaps for each hurricane.",
"Intuitively, we see strong correlations between polarized emotions, that is, emo-5 Abbreviations for Categoryx .",
"This refers to the Saffir-Simpson scale for classifying hurricanes based on sustained wind speed, which ranges from 1-5 in order of severity.",
"tions categorized as positive and negative .",
"For example, ( LOVE , AGGRESSIVENESS ) does not appear as frequently as ( LOVE , OPTIMISM ) or ( CONTEMPT , AGGRESSIVENESS ).",
"However, this premise does not always hold; the pairs ({ DISAPPROVAL , REMORSE }, OPTIMISM ) also co-occur across all hurricanes.",
"Representative of this phenomenon is the tweet: I'm raising money for Hurricane Maria Destroyed Everything. Click to Donate: <URL> via <USER> .",
"The user indicates disapproval towards the hurricane by evoking pathos, but also shows optimism by donating money to a relief effort.",
"Finally, similar to our previous observations (4.2), we notice an increase in co-occurrence frequencies from Harvey Irma.",
"This increase is, somewhat surprisingly, most apparent with ( AWE , OPTIMISM ), although ({ DISAPPROVAL , REMORSE }, AWE ) frequencies also exhibit a noticeable gain.",
"Once again, we posit that users may be expressing their sadness regarding the Cat-4 Cat-5 jump, but at the same time, offering solidarity to those affected by the hurricanes.",
"We now turn to modeling the emotions in HURRICANEEMO .",
"Because Plutchik-24 emotion counts are heavily imbalanced, we group them into Plutchik-8 emotions and consequently create 8 binary classification tasks.",
"The tweets are assorted into their respective label buckets; because tweets may be labeled with more than one emotion, each belongs to one or more buckets.",
"These buckets represent positive samples (i.e., tweets labeled with that emotion).",
"To create negative samples, we sample an equal amount from Plutchik-8 Emotion Train Valid Test Aggressiveness 4,209 526 527 Optimism 11,902 1,488 1,488 Love 2,569 321 322 Submission 6,092 762 762 Awe 7,324 916 916 Disapproval 5,931 741 742 Remorse 7,732 967 967 Contempt 3,763 470 471 Table 4: Train, validation, and test splits for each Plutchik-8 emotion.",
"other buckets.",
"From here, we shuffle the positive and negative samples and perform an 80/10/10 split to create the train, validation, and test sets.",
"6 Table 4 enumerates the splits.",
"We consider both traditional neural models and pre-trained language models.",
"We implement our models in PyTorch (Paszke et al., 2019) and perform all experiments on an NVIDIA Titan V GPU.",
"Training and optimization hyperparameters are detailed in Appendix C. We report mean performance across 10 runs, each with a different random initialization.",
"Below, we elaborate on our models: Traditional Neural Models.",
"Each is equipped with 200D GloVe embeddings pre-trained on 2B tweets (Pennington et al., 2014): (1) Logistic Regression: We average the word embeddings of each token in the sequence (Iyyer et al., 2015); (2) CNN: A word-level CNN (Kim, 2014) with 100 filters of size [3, 4, 5] obtains representations.",
"They are max-pooled and concatenated row-wise.",
"We also experiment with a character-level CNN with filter sizes [5, 6, 7]; (3) GRU: A one-layer, unidirectional GRU (Cho et al., 2014) with a hidden dimension of 100 obtains features, which are mean pooled.",
"For all models, penultimate representations are projected with a weight matrix W R d 2 .",
"Pre-trained Language Models.",
"We fine-tune base versions of BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) using the Hugging-Face Transformers library (Wolf et al., 2019).",
"We 6 We also experimented with keeping all negative samples as opposed to sampling an equal amount.",
"Each binary task had around 5-7x more negative samples; this significantly hurt model performance.",
"Even with a class imbalance penalty, the models almost never predicted positive samples.",
"Note that although, in aggregate, the number of positive and negative samples match, they do not necessarily match in the train, validation, and test splits.",
"use the sentence representations embedded in the [CLS] token, then project it with a weight matrix W R d 2 .",
"The language model and classification parameters are jointly fine-tuned.",
"Table 5 presents our classification results.",
"We make the following observations: BERT consistently outperforms other models on most emotion tasks.",
"BERT shows strong performance across all 8 binary tasks in comparison to traditional neural models and RoBERTa.",
"Unlike most traditional neural models, its accuracy never falls below random chance, showing it captures at least some of the complex phenomena present in our dataset.",
"However, our tasks remain challenging for both types of models alike.",
"For traditional models, word embeddings alone do not provide enough representational power to model our emotional contexts.",
"Although GRUs perform well on EMONET (Abdul-Mageed and Ungar, 2017), we suspect that they simply memorize emotion lexicons (4.1), which is not a notable strategy for capturing implicit emotions.",
"Nevertheless, BERT only obtains an average accuracy of about 64%.",
"This leaves plenty of room for future work; we perform a comprehensive error analysis as a step towards this goal (5.3).",
"Better pre-trained models (e.g., RoBERTa) do not necessarily help performance.",
"Unlike popular benchmarks such as GLUE (Wang et al., 2018) where more pre-training monotonically increases performance, rather encouragingly, we do not observe the same trend.",
"RoBERTa's average performance is around 5% better than GRU's, but still around 6% worse than BERT's.",
"We hypothesize that this drop in performance is attributed to pre-training fine-tuning domain discrepancies.",
"That is, RoBERTa's (additional) pre-training data (e.g., CC-News) may be too distant from Twitter data, which is known for its short contexts and unique vernacular (Ritter et al., 2011).",
"We encourage practitioners to avoid applying state-of-the-art models without augmenting them with task-guided pre-training objectives, as we explore later (6).",
"Using our BERT model, we sample 25 test errors from each of the 8 emotion tasks, yielding a total of 200 errors.",
"We group the errors into the following categories: lexical and syntactic cues (45%), insufficient context (24%), entity mentions (15%), subjective labeling (10%), and unknown reasons (6%).",
"The top three categories are discussed below: Lexical and Syntactic Cues.",
"BERT often relies on surface-level lexical features to make predictions, as do most emotion prediction models.",
"This bias also extends to certain syntactic features, such as punctuation.",
"In pls be safe everyone!!!! , BERT associates the exclamation mark with a positive emotion, but here, the speaker is more concerned.",
"Insufficient Context.",
"Users often comment on events, public policies, or linked content that, by themselves, do not carry features for supervised learning.",
"This type of error is not necessarily a shortcoming of BERT, but rather our dataset.",
"For example, in for [tracy mcgrady] 1 , [hall induction] 2 muted by effects of [hurricane harvey] 3 at home , one use external knowledge to reason between the noun phrases and discern the latent emotions.",
"Entity Mentions.",
"BERT also makes erroneous predictions in the presence of certain entity mentions.",
"For example, BERT classifies this tweet as AGGRESSIVENESS : nytimesworld: mexico offered aid to texas after harvey. but after an earthquake and hurricane, it says all help is needed at home.",
"Here, the user is merely quoting a AGR OPT LOV SBM AWE DSP RMR CNT AVG NO-PRETRAIN 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 Supervised Transfer EMONET 73.5 75.2 55.2 68.8 67.5 53.1 60.0 71.7 65.6 SENTIMENT 72.8 75.8 62.7 71.0 65.6 53.4 57.0 67.3 65.7 Unsupervised Transfer EMONET 72.1 75.1 54.0 61.0 65.1 54.2 60.7 69.4 63.9 SENTIMENT 69.1 74.9 53.6 66.2 67.3 54.3 57.9 64.4 63.5 HURRICANEEXT 73.6 75.4 69.8 68.9 69.7 57.9 60.2 70.2 68.2 Table 6: Task-guided pre-training accuracies (abbreviations defined in Table 5).",
"news statement as opposed to formulating opinions regarding NY Times' discourse.",
"Because the sentiment towards NY Times is negative in our datasets overall (due to public backlash on its stories), BERT likely capitalizes on this mention-emotion bias.",
"To improve upon our baselines, we explore pretraining as a means of implicitly incorporating an inductive bias into our BERT model.",
"Our hope is that these pre-training tasks will not only make BERT more robust in the Twitter domain, but also provide useful (albeit abstract) knowledge for the end emotion prediction tasks.",
"For brevity, we chiefly focus on BERT, although our methods can be generalized to other pre-trained models.",
"Setup.",
"We explore, in isolation, supervised and unsupervised pre-training tasks.",
"For the supervised setting, we pre-train on a multi-class emotion task (EMONET ) (Abdul-Mageed and Ungar, 2017) and binary sentiment analysis task (SENTIMENT ) (Go et al., 2009).",
"For the unsupervised setting, we pretrain on dynamic masked language modeling (Liu et al., 2019) on (unlabeled) samples from EMONET , SENTIMENT , and HURRICANEEXT (3.1).",
"For both types of tasks, we further pre-train BERT for a fixed number of epochs, then fine-tune it on a HURRICANEEMO task.",
"We compare these results to NO-PRETRAIN , namely the BERT results verbatim from Table 5.",
"We report mean performance across 10 pre-training fine-tuning runs.",
"Further training details, including samples sizes for the pre-training tasks, are available in Appendix D. Results.",
"Table 6 shows the pre-training results.",
"Supervised pre-training significantly helps with 3-4 emotions, but degrades overall performance on 2-4 emotions.",
"We posit SENTIMENT aids emotions with highly predictive features.",
"For example, wtf in it's literally the size of texas. wtf is correlated with AGGRESSIVENESS , but no such lexical cues exist in not all heros wear capes <3 thank you stanley homeless #hurricane evacuee grooms lost pets, which is an AWE sample.",
"The unsupervised pre-training results also show a couple trends.",
"First, EMONET largely hurts downstream performance, especially reducing SUBMISSION accuracy by -6%.",
"Second, SENTIMENT (in its unlabeled form) yields no noticeable benefits.",
"This implies sentiment information is much more valuable, but of course, subject to the fact that the emotion task is heavily aligned with the original sentiment task.",
"Third, we obtain encouraging results with HURRICANEEXT pre-training.",
"The gains are most noticeable on AGGRESSIVENESS and LOVE , but this objective adds +1-2% accuracy for tasks on which supervised pre-training suffered.",
"When new disasters emerge, it is likely we may not have emotion annotations, as alluded to previously (2).",
"Nevertheless, these annotations would be valuable for organizations trying to understand the emotional profile of users during a crisis (Fraustino et al., 2012).",
"In this section, we explore ways to leverage supervision from large-scale emotion datasets (e.g., EMONET (Abdul-Mageed and Ungar, 2017)) in providing labels for our hurricane emotion datasets.",
"We frame this problem as unsupervised domain adaptation; EMONET is the labeled source domain and our hurricane datasets are the unlabeled target domain.",
"Below, we elaborate AGR OPT LOV SBM AWE DSP RMR CNT AVG SRC-ONLY 53.3 42.2 43.4 47.1 54.7 49.8 62.5 56.5 51.2 PRETRAIN-SRC 54.8 43.2 45.1 47.8 54.4 50.4 63.3 57.1 52.0 PRETRAIN-TRG 55.0 44.2 46.2 48.0 55.5 49.9 63.7 60.5 52.9 PRETRAIN-JOINT 52.7 44.2 45.5 47.8 54.8 49.9 61.6 56.3 51.6 TRG-ONLY 67.6 75.0 54.0 67.4 68.3 55.7 58.5 66.8 64.1 Table 7: Unsupervised domain adaptation accuracies (abbreviations defined in Table 5).",
"Framework.",
"EMONET was conceived as a multi-class classification task for Plutchik-8 emotions (Abdul-Mageed and Ungar, 2017).",
"In contrast, we introduce binary classification tasks, one for each Plutchik-8 emotion.",
"We split the EMONET multi-class task into 8 binary tasks; this creates a one-to-one alignment between each source and target domain task.",
"We separately perform unsupervised domain adaptation for each binary task.",
"Methods.",
"We use our BERT model (without task-guided pre-training) as the underlying classifier.",
"Following Han and Eisenstein (2019), we chiefly focus on using strategic pre-training techniques that enable effective transfer between disparate domains.",
"The systems for comparison are: (1) SRCONLY : BERT is trained in the source domain and evaluated in the target domain; (2) TRG-ONLY : BERT is trained and evaluated in the target domain.",
"These results are borrowed verbatim from Table 5; (3) PRETRAIN -*: BERT undergoes dynamic masked language modeling pre-training using data from domain *, is trained in the source domain, and finally evaluated in the target domain (Han and Eisenstein, 2019).",
"PRETRAIN-SRC only uses pre-training samples from the source domain, PRETRAIN-TRG only uses samples from the target domain, and PRETRAIN-JOINT uses samples from both the source and target domains.",
"7 We report mean performance across 10 pre-training fine-tuning runs.",
"Results.",
"Table 7 shows the unsupervised domain adaptation results.",
"Overall, we do not find a significant increase in performance over the SRCONLY baseline.",
"Pre-training consistently adds +1% in average accuracy, but still leaves a large gap between PRETRAIN-SRC and TRG-ONLY .",
"Re-7 PRETRAIN-JOINT is conceptually similar to ADAPTABERT in Han and Eisenstein (2019), however, we dynamically generate pre-training data (Liu et al., 2019).",
"gardless, we have a few observations.",
"First, we do not see a (relatively) large increase in performance for SUBMISSION , AWE , DISAPPROVAL , and REMORSE .",
"These emotions may need more explicit strategies to enable domain adaptation.",
"This is also supported by our previous results (6), where we also do not see a (relatively) large benefit from task-guided pre-training.",
"Second, PRETRAINJOINT performs worse than both PRETRAIN-SRC and PRETRAIN-TRG .",
"We posit that, for our emotion tasks, pre-training with a mixture of domains yields a noisier training signal compared to a parameter bias towards the target domain.",
"We present HURRICANEEMO , an annotated dataset of perceived emotions spanning 15,000 tweets from multiple hurricanes.",
"Tweets are annotated with fine-grained Plutchik-24 emotions, from which we analyze implicit and explicit emotions and construct Plutchik-8 binary classification tasks.",
"Comprehensive experiments demonstrate our dataset is a challenging benchmark, even for large-scale pre-trained language models.",
"We release our code and datasets as a step towards facilitating research in disaster-centric domains.",
"Thanks to Katrin Erk for reviewing an early version of this manuscript, Yasumasa Onoe for discussions on masked language model pre-training, and the anonymous reviewers for their helpful comments.",
"This work was partially supported by the NSF Grants IIS-1850153, IIS-1912887, and IIS-1903963."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"other",
"other"
] |
[
"Causal inference is the process of capturing cause-effect relationship among variables.",
"Most existing works focus on dealing with structured data, while mining causal relationship among factors from unstructured data, like text, has been less examined, but is of great importance, especially in the legal do-main.",
"In this paper, we propose a novel G raph-based C ausal I nference ( GCI ) framework, which builds causal graphs from fact descriptions without much human involvement and enables causal inference to facilitate legal practitioners to make proper decisions.",
"We evaluate the framework on a challenging similar charge disambiguation task.",
"Experimental results show that GCI can capture the nuance from fact descriptions among multiple confusing charges and provide explainable discrimination, especially in few-shot settings.",
"We also observe that the causal knowledge contained in GCI can be effectively injected into powerful neural networks for better performance and interpretability.",
"Code and data are available at https://github.com/xxxiaol/GCI/ .",
"Causal inference is the process of exploring how changes on variable T affect another variable Y .",
"Here we call T and Y as treatment and outcome , respectively, and the changes on T are called intervention .",
"In other words, the process of drawing a conclusion about whether and how Y changes when intervening on T is called causal inference .",
"Most research in causal inference is devoted to analyzing structured data.",
"Take the research question how smoking causes lung cancer (Pearl and Mackenzie, 2018) as an example.",
"Smoking , lung cancer , together with distractors like age are extracted from structured data, like electronic health records, and considered as factors.",
"Usually, such * Equal contribution.",
"studies properly organize those factors into human-designed structures, e.g., a causal directed acyclic graph (Wright, 1921) with factors { smoking , age , lung cancer } as nodes and causal relations { smoking lung cancer , age lung cancer } as edges, and perform inference on such structures.",
"Recent works attempt to integrate text information into causal inference (Egami et al., 2018; Veitch et al., 2019; Yao et al., 2019; Keith et al., 2020), but they mainly treat the text as a single node in the causal graph, which is relatively coarse-grained.",
"For instance, Yao et al. (2019) investigate how the complaints from consumers affect the company's responses (admission or denial).",
"They regard the entire text of complaint as a treatment, without looking into different aspects of the text, like the events that consumers complained about and the compensation that consumers requested.",
"Actually, discovering causal relationship inside unstructured and high-dimensional data, like text, is also beneficial or even crucial for scenarios involving reading comprehensive text and making decisions accordingly.",
"For instance, when a legal AI system assists judges to deal with complicated cases that involve multiple parties and complex events, causal inference could help to figure out the exact distinguishable elements that are crucial for fair and impartial judgements.",
"As shown in Figure 1, if the system can automatically spot two 1929 essential key points, 1) the deceitful acts of the defendant, and 2) obtaining properties from the victim, from the unstructured fact descriptions, then the prediction fraud can be more convincing and helpful, rather than a label from a black box.",
"More details of GFCI algorithm and its advantage is provided in Appendix A. The output of GFCI is a graphical object called Partial Ancestral Graph (PAG).",
"PAG is a mixed graph containing the features common to all Di-1930 Edge Meaning A B A causes B. A B There is an unobserved confounder of A and B. A B Either A causes B, or unobserved confounder.",
"In practice, we would expect a legal AI system to provide human-readable and sound explanations to help the court make the right decisions.",
"It is worthwhile especially for underdeveloped areas, where such techniques could help the judges of rural areas with more trustworthy references from previous judgements.",
"This would further help to maintain the principle of treating like cases alike in many continental law systems.",
"A main challenge in judicial practice is to distinguish between similar charges, we thus propose a new task similar charge disambiguation as a testbed.",
"Cases of similar charges often share similar context, and a system is expected to mine the nuance of reasoning process from the text.",
"However, performing causal inference on fact descriptions of criminal cases is not trivial at all.",
"It poses the following challenges.",
"1) Without expert involvement, it is not easy to extract factors that are key to prediction, and organize them in a reasonable form that can both facilitate the inference process and tolerate noise.",
"For example, automatically extracted elements may not cover all the key points in the textual descriptions, and automatically built graphs may contain unreliable edges.",
"2) It is not easy to benefit from both traditional causal inference models and modern neural architectures.",
"In this paper, we propose a novel G raph-based C ausal I nference ( GCI ) framework, which could effectively apply causal inference to legal text analysis.",
"GCI first recognizes key factors by extracting keywords from the fact descriptions and clustering similar ones into groups as individual nodes.",
"Then we build causal graphs on these nodes with a causal discovery algorithm that can tolerate unobserved variables, reducing the impact of missing elements.",
"We further estimate the causal strength of each edge, to weaken unreliable edges as much as possible, and apply the refined graph to help decision making.",
"Experimental results show that our GCI framework can induce reasonable causal graphs and capture the nuance from plain text for legal applications, especially with few training data.",
"neural network (NN) models.",
"We propose two approaches: 1) imposing the causal strength constraints to the NN's attention weights; 2) applying recurrent neural networks on the causal chains extracted from our causal graphs.",
"Experiments indicate that our methods can successfully inject the extracted causal knowledge and thus enhance the NN models.",
"We also show that integrating GCI helps to mitigate the underlying bias in data towards the final prediction.",
"Our main contributions are as follows: 1) We propose a novel graph-based causal inference ( GCI ) framework to apply causal inference to unstructured text, without much human involvement.",
"2) We explore to equip popular neural network models with our GCI , by encouraging neural models to learn from causal knowledge derived from GCI .",
"3) We evaluate our methods on a legal text analysis task, similar charge disambiguation, and experimental results show that our GCI can capture the nuance from plain fact descriptions, and further help improve neural models with interpretability.",
"Two types of questions are typically related to causality.",
"The first is whether there is causal relationship between a set of variables, and the second question is when two variables T and Y are causally related, how much would Y change if we change the value of T .",
"Both of them are discussed in our GCI framework, and we first briefly introduce the key concepts.",
"Causal discovery corresponds to the first type of questions.",
"From the view of graph, causal discovery requires models to infer causal graphs from observational data.",
"In our GCI framework, we leverage Greedy Fast Causal Inference (GFCI) algorithm (Ogarrio et al., 2016) to implement causal discovery.",
"GFCI combines score-based and constraint-based algorithms.",
"It merges the best of both worlds, performing well like score-based methods, and not making too unrealistic assumptions like constraint-based ones.",
"Specifically, GFCI does not rely on the assumption of no latent confounders, thus is suitable in our situation.",
"rected Acyclic Graphs (DAGs) that represent the same conditional independence relationship over the measured variables.",
"In other words, PAG entails all the possibilities of valid DAGs concerning the original data.",
"In causal inference settings (Zhang, 2008), four types of edges are provided in PAG, as listed in Table 1.",
"With PAG, we are able to consider unobserved confounders and the uncertainty in causal inference.",
"Causal strength estimation deals with the second type of questions.",
"It is the very task to quantify the causal strength of each learned relation, i.e., whether the relation is strong or weak.",
"To precisely estimate causal strength, confounders need to be kept the same.",
"Confounder is a variable causally influencing both treatment T and outcome Y .",
"Take the example of smoking, lung cancer and age in Section 1.",
"Here we study if there is causal relationship between smoking ( T ) and lung cancer ( Y ), and age is a confounder C .",
"It is straightforward to compare the proportion of lung cancer among smokers and non-smokers.",
"However, age influences both smoking and lung cancer .",
"Older people are more likely to smoke.",
"They also have a much higher risk of suffering from cancer.",
"If we do not consider the value of age , its influence to lung cancer will be regarded as smoking 's influence, thus wrongly amplify the causal effect of smoking to lung cancer .",
"In our GCI framework, we apply Average Treatment Effect (ATE) (Holland, 1986) as a measure of causal strength.",
"All variables are binary in our work.",
"So given an edge T Y , we quantify how the outcome Y is expected to change if we modify the treatment T from 0 to 1: T,Y = E [ Y | do ( T = 1)] E [ Y | do ( T = 0)] , (1) where E means expectation, and the do-calculus do ( T = 1) indicates intervention on T , setting its value to 1 .",
"pairs of comparable samples with the most similar propensity scores, where each pair consists of one sample in the treated group and one in the untreated.",
"Given the great similarity between the two samples, we could make a direct comparison between them.",
"Specifically, propensity score L ( z ) = P ( T = 1 | Z = z ) is the probability of treatment being assigned to 1 given a set of observed confounders z .",
"As T is binary, we have T Z | L ( means independence).",
"So matching on propensity scores equals matching on the full set of confounders.",
"Our graph-based causal inference ( GCI ) framework consists of three parts, constructing the causal graph, estimating causal strength on it, and making decisions.",
"Figure 2 shows the overall architecture.",
"We first define the similar charge disambiguation task.",
"Given the fact descriptions of criminal cases D = { d 1 , d 2 , . . . , d N } , a system is expected to classify each case into one charge from the similar charge set C = { c 1 , c 2 , . . . , c M } .",
"Extracting Factors.",
"To prepare nodes for the causal graph, we calculate the importance of word w j for charge c i using YAKE (Campos et al., 2020).",
"We enhance YAKE with inverse document frequency (Jones, 1972) to extract more discriminative words of each charge.",
"To discriminate the similar charges, we select p words with the highest importance scores for each charge, cluster them into q classes to merge similar keywords.",
"The q classes together with the M charges form the nodes of the causal graph.",
"All these factors are binary.",
"When the graph is applied to a case, each factor is of value 1 if it exists in this case, and 0 if not.",
"Unlike factors extracted by experts, automatically extracted keywords may be incomplete, resulting in unobserved confounders in causal discovery.",
"Learning Causal Relationship.",
"The next step is to build edges for the graph, in other words, discover the causal relationship between different factors.",
"To learn causal relations and tackle the unobserved confounder problem, we use GFCI (Ogarrio et al., 2016), which does not rely on the assumption 1931 Extract Factors LabeledCasesLabeledCases A B C D Y1 Y2 A B C D Y1 Y2 A B C D A B C D A B C D A B C DA B C D 0.9 0.1 Y1 Y2 Y1 Y2 Y1 ABCD Y1Y1 ABC D Y1Y1 A BCD Y1Y1 Y2Y2 ABCD Y1Y1 A B CD Y1 Y2 ... ...",
"We further introduce constraints to filter noisy edges.",
"First, as the judgement is made based on the fact description, we do not allow edges from charge nodes to other ones, e.g., an edge from fraud to lie is prohibited.",
"Second, given that causes usually appear before effects in time (Black, 1956), and fact descriptions in legal text are often written in the temporal order of events, we thus consider the chronological order of descriptions as temporal constraints to filter noisy edges.",
"If factor A appears after B in most cases, we will not allow the edge from A to B .",
"Note that this constraint does not imply there is an edge from B to A , as chronological order is not a sufficient condition of causality.",
"Sampling Causal Graphs.",
"PAG contains uncertain relations shown in Table 1, which leaves challenges for quantification and further application.",
"So we sample Q causal graphs from PAG.",
"Among the four edge types, and are clear: in each sampled graph, edges are retained and edges are removed (because they do not indicate causal relations between the two nodes).",
"For edges, they have two possible choices: being kept (cause) and being removed (unobserved confounder).",
"In the absence of true possibility, we simply keep an edge with 1 / 2 probability, and remove it with another 1 / 2 .",
"And for edges, we give 1 / 3 probability for , , and no edge, respectively.",
"The quality of each sampled graph G q is measured by its fitness with data X , where we use the Bayesian information criterion BIC( G q , X ) to estimate (Schwarz et al., 1978).",
"As the resulting graphs are noisy in nature, we estimate the strength of the learned causal relations to refine a sampled causal graph.",
"We assign high strength to edges with strong causal effect, and near-zero strength to edges that do not indicate causal relations or with weak effect.",
"We regard the Average Treatment Effect GT,Y (ATE, Section 2.2) as the strength of T Y in graph G , and utilize the Propensity Score Matching (PSM, Section 2.2) to measure it: GT,Y = [ (cid:2) i : t i =1 ( y i y j ) + (cid:2) i : t i =0 ( y j y i )] /N, (2) where j = argmin k : t k (cid:3) = t i | L ( z i ) L ( z k ) | means the most similar instance in the opposite group of i , and t i , y i , z i are the value of treatment, outcome and confounders of instance i , respectively.",
"When applying the sampled causal graphs to the similar charge disambiguation task, we simply extract factors and map the case description with the graph accordingly, and decide which charge in C is more appropriate to this case.",
"Firstly, we compute the overall causal strength of each factor T j to Y i among the Q sampled causal graphs, where Y i represents whether charge c i is committed: T j ,Y i = Q (cid:2) q =1 BIC( G q , X ) G q T j ,Y i , (3) 1932 where G q T j ,Y i is the measured causal strength in G q , and is 0 if edge T j Y i does not exist in G q .",
"For each case, we then map the text with the graphs, and calculate scores for each charge: S ( Y i ) = (cid:2) T j Tr ( Y i ) T j ,Y i ( T j ) , i { 1 , . . . , M } , (4) where ( T j ) is a dummy variable indicating the presence of T j in this case, and T r ( Y i ) is the set of treatments of Y i (from the view of graph, the nodes pointing to Y i ).",
"The calculated scores are fed into a random forest classifier (Ho, 1995) to learn thresholds between the charges.",
"More advanced classifiers can also be used.",
"Neural networks (NN) are considered to be good at exploring large volumes of textual data.",
"This motivates us to integrate the causal framework with NN, to benefit each other.",
"Here we propose two integration methods as shown in Figure 3.",
"First, we inject the estimated causal strength to constrain the attention weights of a Bi-LSTM with attention model (Zhou et al., 2016).",
"A Bi-LSTM layer is first applied to the fact descriptions to obtain contextual embeddings H = { h 1 , h 2 , . . . , h n } , h i R b 0 , where b 0 is the dimension of embeddings.",
"Then, an attention layer assigns different weights { a 1 , a 2 , . . . , a n } to each word, and sums the words up according to the weights to build a text embedding v : a i = exp( q T h i ) (cid:3) nk =1 exp( q T h k ) , v = n (cid:2) i =1 a i h i , (5) where q R b 0 is a learnable query vector.",
"Finally, we apply two fully connected layers to the text embedding v , and form the prediction vector r cons .",
"Besides a cross-entropy loss L cross on r cons , we introduce an auxiliary loss L cons to guide the attention module with the causal strength learned from GCI .",
"Given the golden label c j , for each word w i which belongs to the factor f , T f ,Y j is the corresponding causal strength, and g i is the normalized strength over the whole sequence.",
"L cons is set to make the attention weights close to the normalized LSTMA.",
"Note that in the validation and testing stages, the inputs do not contain any strength constraint and golden charge information.",
"Therefore, we select the epoch with the least cross-entropy loss in the validation stage to evaluate on the test set.",
"Causal chains are another type of knowledge that can be captured from causal graphs.",
"In the legal scenario, causal chains depict the process of committing crimes.",
"They can also be treated as the summarization of cases or behavioural patterns for each charge.",
"Therefore, the second approach is to leverage the causal chains directly, as the chains may contain valuable information for judgement.",
"For a given text, we extract factors and traverse all causal chains composed by the factors from the sampled causal graphs, Chains = { chain 1 , chain 2 , . . . , chain m } .",
"In this task, we only consider chains ending up with treatments of charges, as they are more relevant with the judgement.",
"An LSTM layer is applied to each chain, and all the chains are pooled to build case representation c R b 0 : ch i = l i (cid:2) j =1 (LSTM( chain i ) j ) , c = MaxPooling(BIC( G q , X ) ch i ) , 1 i m, chain i G q , (7) where l i indicates the length of chain i .",
"The case representation c is then fed to two fully connected layers to make the prediction r chain , and a cross-entropy loss is used to optimize the model.",
"1933 Charge Sets Charges #Cases Personal Injury Intentional Injury & Murder & 6377 / 2282 / Involuntary Manslaughter 1989 Violent Acquisition Robbery & Seizure & 5020 / 2113 / Kidnapping 622 F&E Fraud & Extortion 3536 / 2149 E&MPF Embezzlement & 2391 / 1998 Misappropriation of Public Funds AP&DD Abuse of Power & 1950 / 1938 Dereliction of Duty Table 2: Summary of the similar charge sets.",
"Causal Chains vs. Whole Text.",
"Both CausalChain and LSTM are a straightforward application of unidirectional LSTM, but over different texts, one for our extracted causal chains and the other for the whole fact description.",
"We find CausalChain outperforms LSTM by 8.2% on average Acc and 11.7% on average F1.",
"The difference shows that causal chains contain condensed key information that contributes to the judgement, while the whole description may contain far more irrelevant information that may disturb the prediction.",
"We also conduct experiments on combining causal chains and the 1934 Models Personal Violent F&E E&MPF AP&DD Average Injury Acquisition LSTM 1% 60.94 / 37.91 58.48 / 29.33 63.91 / 47.00 53.56 / 39.84 52.08 / 46.13 57.79 / 40.04 5% 61.97 / 44.88 67.09 / 35.86 71.60 / 68.68 59.89 / 56.88 54.12 / 48.53 62.93 / 50.97 10% 76.45 / 67.81 65.64 / 47.62 82.14 / 80.74 70.21 / 70.00 55.46 / 51.29 69.98 / 63.49 30% 85.37 / 81.27 74.43 / 66.05 88.10 / 87.33 71.60 / 70.82 65.61 / 65.19 77.02 / 74.13 50% 85.67 / 83.02 80.10 / 72.27 90.04 / 89.06 75.59 / 75.46 69.65 / 69.62 80.21 / 77.89 Bi-LSTM 1% 62.29 / 40.81 53.86 / 33.25 62.95 / 43.27 54.54 / 41.91 48.98 / 37.84 56.52 / 39.42 5% 74.00 / 69.52 65.18 / 38.99 60.34 / 56.96 61.88 / 61.63 51.77 / 46.23 62.63 / 54.66 10% 76.66 / 71.86 67.10 / 46.07 85.31 / 84.37 60.08 / 53.34 60.20 / 57.95 69.87 / 62.72 30% 85.46 / 82.53 75.30 / 64.12 87.57 / 86.58 70.45 / 69.64 65.45 / 65.12 76.85 / 73.60 50% 87.19 / 85.01 78.43 / 69.94 90.43 / 89.83 76.08 / 75.78 71.12 / 70.50 80.65 / 78.21 GCI 1% 69.54 / 49.77 57.08 / 42.55 82.81 / 82.56 74.65 / 70.22 62.47 / 61.72 69.31 / 61.36 5% 81.19 / 75.58 69.70 / 60.39 88.25 / 87.24 83.27 / 83.06 78.09 / 77.95 80.10 / 76.84 10% 80.33 / 74.50 74.06 / 67.31 87.97 / 87.51 85.23 / 84.62 78.36 / 78.31 81.19 / 78.45 30% 84.83 / 80.10 75.99 / 70.64 89.31 / 88.39 88.55 / 88.21 80.82 / 80.56 83.90 / 81.58 50% 85.72 / 81.62 76.31 / 71.45 90.41 / 89.14 89.01 (cid:2) / 88.63 (cid:2) 81.01 (cid:2) / 80.90 (cid:2) 84.49 / 82.35 GCI-co 1% 67.49 / 44.43 63.70 / 34.64 75.72 / 67.60 69.08 / 67.20 64.93 / 64.41 68.19 / 55.66 5% 76.70 / 63.94 67.65 / 34.35 86.63 / 85.81 82.23 / 81.86 73.94 / 73.77 77.43 / 67.95 10% 68.05 / 45.37 69.26 / 46.39 85.62 / 84.41 81.23 / 79.64 74.21 / 74.05 75.67 / 65.97 30% 77.31 / 63.45 70.42 / 50.94 81.44 / 80.54 85.71 / 85.20 74.43 / 74.28 77.86 / 70.88 50% 79.21 / 69.37 70.38 / 50.78 79.30 / 77.58 84.39 / 83.72 74.16 / 73.99 77.49 / 71.09 CausalChain 1% 73.20 / 60.31 63.60 / 44.02 68.01 / 52.93 66.97 / 56.66 63.13 / 62.30 66.98 / 55.24 5% 81.99 / 76.03 70.57 / 59.85 88.64 / 87.21 75.13 / 74.74 71.75 / 70.38 77.62 / 73.64 10% 81.21 / 74.71 73.50 / 66.66 87.59 / 86.36 79.75 / 79.45 74.43 / 74.11 79.30 / 76.26 30% 85.61 / 81.00 74.93 / 67.30 89.10 / 88.19 81.63 / 81.25 80.90 / 80.50 82.43 / 79.65 50% 86.41 / 83.11 75.66 / 68.47 90.45 / 89.21 81.25 / 80.09 80.03 / 79.89 82.76 / 80.16 Bi-LSTM+Att 1% 62.16 / 41.70 58.21 / 32.97 67.99 / 62.80 57.90 / 50.67 53.20 / 41.78 59.89 / 45.99 5% 78.29 / 72.81 67.50 / 50.68 85.30 / 84.28 61.86 / 55.38 58.76 / 53.03 70.34 / 63.23 10% 81.51 / 78.36 67.97 / 58.26 88.07 / 87.33 75.38 / 74.86 58.82 / 55.82 74.35 / 70.93 30% 86.07 / 83.49 80.47 / 72.55 88.97 / 88.41 81.53 / 81.14 72.84 / 72.65 81.98 / 79.65 50% 87.25 / 85.38 82.27 / 74.15 91.56 / 91.05 82.29 / 82.11 73.70 / 73.65 83.41 / 81.27 Bi-LSTM+Att+Cons 1% 70.12 / 59.46 54.29 / 40.34 78.25 / 76.80 61.03 / 60.62 53.84 / 44.93 63.51 / 56.43 5% 79.07 / 75.89 73.09 / 56.84 86.80 / 86.35 66.86 / 59.89 72.27 / 72.18 75.62 / 70.23 10% 83.33 / 79.70 76.26 / 64.62 88.76 / 88.02 80.03 / 79.64 73.53 / 73.48 80.38 / 77.09 30% 86.55 / 83.85 81.48 / 73.15 89.80 / 89.35 81.82 / 81.31 79.46 / 79.35 83.82 / 81.40 50% 88.31 (cid:2) / 86.18 (cid:2) 82.72 (cid:2) / 76.03 (cid:2) 92.05 (cid:2) / 91.55 (cid:2) 83.02 / 82.69 80.72 / 80.64 85.36 (cid:2) / 83.42 (cid:2) Table 3: Performance on similar charge disambiguation.",
"Dataset.",
"For the similar charge disambiguation task, we pick five similar charge sets from the Criminal Law of the People's Republic of China (Congress, 2017), which are hard to discriminate in practice (Ouyang et al., 1999), and select the corresponding fact descriptions from the Chinese AI and Law Challenge (CAIL2018) (Xiao et al., 2018).",
"Detailed statistics of the charge sets are given in Table 2.",
"Note we filter out the cases whose judgements include multiple charges from one charge set.",
"The fact descriptions in our dataset are in Chinese.",
"Our Models.",
"We evaluate our graph-based causal inference ( GCI ) framework as described in Section 3, and two models integrating GCI with NN ( Bi-LSTM+Att+Cons and CausalChain ) as described in Section",
"4. Comparison Models.",
"To study the effect of causal relationship captured by GCI , we implement a variant called GCI-co , which is built upon a correlation-based graph rather than our discovered causal graph.",
"In detail, we compute the Pearson correlation coefficient for every two factors, and draw an edge if > 0 .",
"5 .",
"The direction of the edge is from the factor that appears earlier in the text more often, to the other.",
"Then we compare GCI and two integration methods with NN baselines, including LSTM , Bi-LSTM and Bi-LSTM+Att .",
"Bi-LSTM+Att is a common backbone of legal judgement prediction models, while we do not add multitask learning (Luo et al., 2017) and expert knowledge (Xu et al., 2020) for simplicity.",
"Since the prior knowledge learned from pre-trained models may result in unfair comparison, we do not choose the models such as BERT (Devlin et al., 2018) as baselines and backbones to eliminate the influence.",
"Previous works integrating text into causal inference are not able to find causal relationships inside text, so we do not take them into comparison.",
"We select a set of training ratios, 1%, 5%, 10%, 30%, and 50%, to study how the performance gap changes along with different training data available.",
"For each setting, we run the experiments on three random seeds and report the average accuracy (Acc) and macro-F1 (F1).",
"More details about baselines, parameter selection, and training process are in Appendix B. 5.2 Main Results Table 3 reports the charge disambiguation performance of our models and comparison models.",
"Causal Graph vs. Correlation-based Graph.",
"GCI outperforms GCI-co by 4.5% on average Acc, and 9.8% on average F1, indicating the graph constructed by mining causal relations better captures the relationship between charges and factors.",
"Causal Inference vs. Neural Networks.",
"Comparing GCI with NN baselines LSTM , Bi-LSTM and Bi-LSTM+Att , we observe in few-shot settings (1%, 5%), GCI outperforms NNs by about 10% on average, since NNs tend to underfit in few-shot settings.",
"However, with the increase of training data, the performance gap becomes narrower and consequently, NNs outperform GCI in several cases.",
"Compared with GCI , NNs have the advantage of learning from large amounts of unstructured data.",
"Adding Strength Constraints.",
"We can see that Bi-LSTM+Att+Cons outperforms Bi-LSTM+Att by around 1-5%.",
"The performance gap is much larger in few-shot settings.",
"This suggests that our estimated causal strength is helpful for attention-based models to capture the key information in the text.",
"whole plain text, but simply concatenating them does not work well since the whole text may introduce noise, and better integration methods are needed, which we leave for future work.",
"To analyze the robustness of the causal discovery process, we apply sensitivity analysis on the causal graphs.",
"In detail, we make disturbance to the original causal relations, and examine the sensitivity of causal effect towards the violations.",
"Following Kiciman and Sharma (2018), we use three refuters for examination: 1) Random Confounder , a new confounder with random value is added to the graph, and ideally, the causal strength should remain the same as before.",
"2) Placebo Treatment , the value of a treatment is replaced by a random value, so the treatment becomes a placebo, and the strength should be zero.",
"3) Subset of Data , we use a subset of cases to recalculate the strength.",
"Ideally, the strength estimation will not vary significantly.",
"We take a sampled causal graph of F&E (Fraud & Extortion) as an example, and exhibit the refuters on the treatments of charge extortion in Figure",
"4. Causal strength is almost the same as before after Random Confounder and Subset of Data refutation; and turns to nearly zero after Placebo Treatment .",
"The results show that our graph construction method is robust against disturbance.",
"Causal chains manifest how effective GCI is in terms of inducing common patterns of suspect's",
"1935",
"6.3 Effect of Integrating Causal Strength with Attention Following Lei et al. (2017), we conduct human evaluation on words accorded with high attention Charge Sets Bi-LSTM+Att Bi-LSTM+Att+Cons Personal Injury 3.03 3.17 Violent Acquisition 3.18 3.74 F&E 3.34 3.65 E&MPF 3.13 3.27 AP&DD 3.08 3.13 Table 4: Results of human evaluation.",
"behaviours.",
"It is helpful for people to better understand the core part of legal cases.",
"Here we select the causal graph of Personal Injury (Intentional Injury & Murder & Involuntary Manslaughter) and showcase several causal chains underlying the text.",
"As shown in Figure 5, the chains depict common patterns of these charges, from the initial causes, to the final criminal behaviour.",
"More examples are provided in Appendix D. Now the question is how the graph structures help to discriminate similar charges.",
"Here we analyze the nuance between the causal chains of two similar charges E&MPF (Embezzlement & Misappropriation of Public Funds).",
"For both charges, the cases often first give the background that someone held a certain position of authority, thus had power.",
"Then both kinds of cases describe that the person illegally obtained a large amount of money by utilizing his power.",
"While the cases in E&MPF share very similar context, there exists nuance between them: embezzlement emphasizes that someone privately and illegally possesses the bulk of money, while misappropriation of public funds emphasizes that someone would temporally use the money for a certain purpose.",
"By observing the causal chains of two charges, GCI could capture the slight difference well: for embezzlement, the causal chains tend to be work / take charge of take advantage of (position/power) ; for misappropriation of public funds, the causal chains tend to be take charge of take advantage of (position/power) misappropriate profit .",
"The difference between the former and latter chains is whether the person had subsequent behaviour (e.g., using the money for purposes like making profits).",
"We could observe that the difference in causal chains accords with the definitions of the two charges.",
"weights and compare the evaluation results of standard ( Bi-LSTM+Att ) and constraint-based attention models ( Bi-LSTM+Att+Cons ).",
"For each set of charges, we train both models with 10% data, and randomly select 30 cases that both models predict correctly.",
"For the total 150 cases, we showcase content, charge names, attention weights above 0.05, and corresponding words.",
"Each participant is asked to score from 1 to 5 for the extent of how beneficial the extracted keywords are to disambiguation.",
"A higher score means that the attention weights indeed capture the keywords.",
"Each case is assigned to at least four participants.",
"Results are shown in Table",
"4. We observe that the constraint-based model is better at explanation than normal attention-based models on all five charge groups.",
"Take Violent Acquisition as an example.",
"Although the cases are predicted correctly by Bi-LSTM+Att , the model tends to attend to words bag , RMB and value , which frequently occur but cannot be treated as clues for judgement.",
"Instead, Bi-LSTM+Att+Cons values factors such as grab , rob and hold , which are more helpful for judgement.",
"Causal Inference with Text.",
"Recently, a few works try to take text into account when performing causal inference.",
"Landeiro and Culotta (2016) introduce causal inference to text classification, and manage to remove bias from certain out-of-text confounders.",
"Wood-Doughty et al. (2018) use text as a supplement of missing data and measurement error for the causal graphs constructed by structured data.",
"Egami et al. (2018) focus on mapping text to a low-dimensional representation of the treatment or outcome.",
"Veitch et al. (2019) and Yao et al. (2019) treat text as confounder and covariate, which help to make causal estimation more accurate.",
"These works all build causal graphs manually, and regard text as a whole to be one of the factors.",
"In contrast, we set our sights on text containing rich causal information in itself.",
"Paul (2017) looks into text by computing propensity score for each word, but only 1936 focuses on causal relationship between words and sentiment.",
"We instead take a causal graph perspective, discover and utilize causal relationship inside text to perform reasoning.",
"Neural Networks for Causal Discovery.",
"Recently, researchers attempt to apply neural networks to causal discovery (Ponti and Korhonen, 2017; Alvarez-Melis and Jaakkola, 2017; Ning et al., 2018; Gao et al., 2019; Weber et al., 2020).",
"However, Alvarez-Melis and Jaakkola (2017) model causal relationship by correlation, which may introduce bias into causal inference; Ning et al. (2018) and Gao et al. (2019) merely focus on capturing causality by explicit textual features or supervision from labeled causal pairs.",
"There are also a line of works focusing on how to use neural networks to summarize confounders and estimate treatment effects (Louizos et al., 2017; Yao et al., 2018; Knzel et al., 2018), which are parts of the whole causal inference process.",
"Weber et al. (2020) show how to formalize causal relationships in script learning, but it is limited to pairwise learning of events and cannot be generalized to sequential and compositional events.",
"Legal Judgement Prediction.",
"Previous works in legal text analysis focus on the task of legal judgement prediction (LJP).",
"Luo et al. (2017) and Zhong et al. (2018) exploit neural networks to solve LJP tasks.",
"Zhong et al. (2020) provide interpretable judgements by iteratively questioning and answering.",
"Another line pays attention to confusing charges: Hu et al. (2018) manually design discriminative attributes, and Xu et al. (2020) use attention mechanisms to highlight differences between similar charges.",
"Using knowledge derived from causal graphs, GCI exhibits a different and interpretable discrimination process.",
"Although GCI is effective on its own and when working with powerful neural network models, there is still room for further improvement.",
"More Precise Causal Inference Models.",
"The causal estimation results of GCI is based on the constructed causal graphs in former stages, and the automated construction process may bring imprecise factors and even omissions.",
"As clustering algorithms try to summarize the general characteristics of text, descriptions with subtle differences may be clustered into one factor, but the differences matter in legal judgement.",
"For example, in Personal Injury's graph, different ways of killing are summarized as the factor kill , therefore lose valuable information.",
"Specifically, beaten to death might occur in cases of involuntary manslaughter, while shooting cases are more likely to be associated with murder.",
"Also, factors with low frequency may be omitted in clustering, but are actually useful for discrimination.",
"Overall, under the circumstance without much expert effort, it is worthwhile to explore how to construct a more reliable causal graph.",
"Deep understanding on legal documents.",
"Although GCI to some extent tackles the challenges of incapability of causal inference methods on unstructured text, it may make mistakes when facing complex fact descriptions.",
"Negation semantics is a typical example.",
"It is occasional to see negation word usage in fact descriptions, which usually indicates that someone did not have a certain behaviour.",
"However, GCI has not considered this aspect, and may be awry in cases containing negation usage.",
"Besides, pronoun resolution is also an important aspect that may confuse models.",
"For example, certain behaviour is done by the victim and the subject of the behaviour is a pronoun.",
"If the model was unaware of the subject of the behaviour, it would be counted as criminal's behaviour and introduces noise to later inference stage.",
"Moreover, intent could be a decisive factor when discriminating charges, for example, murder and involuntary manslaughter.",
"But it may not be mentioned explicitly in the fact descriptions.",
"It should be better to recover the complete course of fact and recognize the implicit intents between the lines with deep understanding of the context and relevant common sense knowledge.",
"We propose GCI , a graph-based causal inference framework to discover causal information in text, and design approaches to integrate causal models and neural networks.",
"In the similar charge disambiguation task, we show our approaches capture important evidence for judgement and nuance among charges.",
"Further analysis demonstrates the quality of causal graphs, value of causal chains, and interpretability of computed causal strength.",
"1937 10 Ethical Considerations 10.1 Intended Use We aim to facilitate legal service with our proposed GCI , providing valuable evidence instead of directly making judgements.",
"Acknowledgments This work is supported in part by National Hi-Tech R&D Program of China (2018YFC0831900).",
"We would like to thank the anonymous reviewers for the helpful discussions and suggestions.",
"Also, we would thank Yuxuan Lai, Liunian Harold Li, Jieyu Zhao, Chen Wu and Xingchen Lan for advice about 1938 experiments and writing.",
"We hope that legal AI models could assist legal workers in underdeveloped areas, helping them to explore key points in cases, discriminate from similar crimes, and make better decisions.",
"By treating like cases alike in different regions of the country, influences of judges' intellectual flaws and arbitrariness are weakened, and rights of people, both the defendants and the victims, are protected.",
"Failure Mode.",
"The model may give wrong evidence in some cases, but this will not cause signifi-cantly bad impact.",
"The process of GCI is transparent.",
"Extracted factors, the causal relations between them, and causal chains that lead to the final decision are all shown to users.",
"By checking these rationales of model reasoning, users can clearly find where goes wrong, and not adopt the outputs, or intervene and correct the model.",
"Misuse Potential.",
"We emphasize that such a model cannot be used individually, as the trial process is seriously conducted and regulated by the judicial system.",
"In the actual judicial process, the prosecutors, judges, and lawyers are under strict supervision.",
"We do not think there is a possibility for them to misuse computer models.",
"Criminal behaviour is very unbalanced in gender.",
"Take the three charges in our Personal Injury charge set as an example.",
"Lin and Zou (2020) counted gender ratio in criminal cases from 2013 to 2017 in China, and the ratios of male defendant are 94.70% (Intentional Injury), 87.72% (Murder), and 93.97% (Involuntary Manslaughter).",
"The disparity in defendants of male and female leads to the small proportion of female cases in training corpus.",
"Therefore, female cases may be inadequately trained.",
"If this results in more incorrect predictions for female cases, women's rights are violated.",
"Following Dixon et al. (2018) and Park et al. (2018), we use False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED) to examine the performance difference in Metrics Bi-LSTM+Att Bi-LSTM+Att+Cons FPED 0.048 0.032 FNED 0.065 0.049 Table 5: Results of equality difference.",
"two genders.",
"They are defined as: FPED = (cid:2) t T | FPR FPR t | , FNED = (cid:2) t T | FNR FNR t | , (8) where FPR is false positive rate of classification, FNR is false negative rate, and T = { male, female } .",
"The two metrics quantify the extent of variation between the performances of two genders.",
"Applying them to Bi-LSTM+Att and Bi-LSTM+Att+Cons models in Personal Injury charge set, the results are shown in Table",
"5. The model with causal constraints achieves smaller variance measured by both metrics, which reduce between 1 / 4 to 1 / 3 of the unfairness in performance of Bi-LSTM+Att .",
"This shows the superiority of our model with causal knowledge.",
"Compared with normal neural networks, our constraint-based model utilizes causal relations, which are more stable to the number of occurrences.",
"Though adding causal knowledge narrows the equality difference, it still exists in Bi-LSTM+Att+Cons (the metrics are greater than zero).",
"Other types of bias may also exist in our model, given that the training corpora contain decisions of humans and systemic bias of humans may be preserved.",
"Further debiasing method is needed if the model is put into real use.",
"In general, we believe that adding causal knowledge to decision making will help debiasing, and the transparent exhibition of causal graphs and chains will enable people to find biases in time and correct them.",
"For any correspondence, please contact Yansong Feng.",
"1986.",
"Statistics and causal inference.",
"Journal of the American statistical Association , 81(396):945960.",
"Karen Sparck Jones.",
"1972.",
"A statistical interpretation of term specificity and its application in retrieval.",
"Journal of documentation .",
"Katherine A Keith, David Jensen, and Brendan O'Connor.",
"2020.",
"Text and causal inference: A review of using text to remove confounding from causal estimates.",
"arXiv preprint arXiv:2005.00649 .",
"Emre Kiciman and Amit Sharma.",
"2018.",
"Tutorial on causal inference and counterfactual reasoning.",
"In ACM KDD International Conference on Knowledge Discovery and Data Mining .",
"Virgile Landeiro and Aron Culotta.",
"2016.",
"Robust text classification in the presence of confounding bias.",
"In Thirtieth AAAI Conference on Artificial Intelligence .",
"Tao Lei et al. 2017.",
"Interpretable neural models for natural language processing .",
"Ph.D. thesis, Massachusetts Institute of Technology.",
"Wei Lin and Shaokun Zou.",
"2020.",
"The Bluebook on the Big Data of Criminal Justice .",
"Peking University Press."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"result",
"abstain",
"objective",
"method",
"result",
"objective",
"objective",
"result",
"other",
"objective",
"objective",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Natural language processing (NLP) algorithms have become very successful, but they still struggle when applied to out-of-distribution examples.",
"In this paper we propose a controllable generation approach in order to deal with this domain adaptation (DA) challenge.",
"Given an input text example, our DoCoGen algorithm generates a domain-counterfactual textual example (D-CON ) that is similar to the original in all aspects, including the task label, but its domain is changed to a desired one.",
"Importantly, DoCoGen is trained using only unlabeled examples from multiple domains no NLP task labels or parallel pairs of textual examples and their domain-counterfactuals are required.",
"We show that DoCoGen can generate coherent counterfactuals consisting of multiple sentences.",
"We use the D-CON s generated by DoCoGen to augment a sentiment classifier and a multi-label intent classifier in 20 and 78 DA setups, respectively, where source-domain labeled data is scarce.",
"Our model outperforms strong baselines and improves the accuracy of a state-of-the-art unsupervised DA algorithm.",
"1 1 Introduction Natural Language Processing (NLP) algorithms are constantly improving and reaching significant milestones (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020).",
"However, such algorithms rely on the availability of sufficient labeled data and the assumption that the training and test sets are drawn from the same underlying distribution.",
"Unfortunately, these assumptions do not hold in many cases due to the costly and labor-intensive data labeling process and since text may originate from many different domains.",
"As generalization in low resource regimes and beyond the training distribution are still fundamental NLP challenges, Both authors equally contributed to this work.",
"1 Our code and data are available at https://github.",
"NLP algorithms significantly degrade when applied to such scenarios.",
"Domain adaptation (DA) is an established field of research in NLP (Roark and Bacchiani, 2003; Daum III and Marcu, 2006; Reichart and Rap-poport, 2007) that attempts to explicitly address generalization beyond the training distribution (2).",
"DA algorithms are trained on annotated data from source domains to be effectively applied in various target domains.",
"Indeed, DA algorithms have been developed for multiple NLP tasks throughout the last two decades (Blitzer et al., 2006, 2007; Glorot et al., 2011; Rush et al., 2012; Ziser and Reichart, 2017, 2018a,b; Han and Eisenstein, 2019).",
"A natural alternative to costly human annotation would be to automatically generate labeled examples for model training.",
"Doing so may expose the model to additional training examples and better represent the data distribution within and outside the annotated source domains.",
"Unfortunately, generating labeled textual data is challenging (Feng et al., 2021), especially when the available labeled data is scarce.",
"Indeed, labeled data generation has hardly been applied to DA (2).",
"To allow DA through labeled data generation, we present DoCoGen , an algorithm that generates domain-counterfactual textual examples (D-CON s).",
"In order to do that, DoCoGen intervenes on the domain-specific terms of its input example, replacing them with terms that are relevant for its target domain while keeping all other properties fixed, including the task label.",
"Consider the task of sentiment classification (top example in Table 1).",
"When DoCoGen encounters an example from the Kitchen domain (its source domain), it first recognizes the terms related to Kitchen reviews, i.e., knife and solid .",
"Then, it intervenes on these terms, replacing them with text that connects the example to the Electronics domain (its target domain) while keeping the negative sentiment.",
"rithm (Li et al., 2016; Russo et al., 2020) that is trained using a novel unsupervised sentence reconstruction objective.",
"Importantly, it does not require task-annotated data, or parallel pairs of sentences and their D-CON",
"s. A key component of DoCoGen is the domain orientation vector , which guides the model to generate the new text in the desired domain.",
"The parameters of the orientation vectors are learned during the unsupervised training process, allowing the generation model to share information among the various domains it is exposed to.",
"We focus on two low resource scenarios: Unsupervised domain adaptation (UDA) and any domain adaptation (ADA, Ben-David et al. (2021)), with only a handful of labeled examples available from a single source domain.",
"In both UDA and ADA the model is exposed to limited labeled source domain data and to unlabeled data from several domains.",
"However, in UDA the unlabeled domains contain the future target domain to which the model will be applied, while in ADA the model has no access to the target domain during training.",
"To cope with these extreme conditions, we use DoCoGen to enrich the source labeled data with D-CON s from the unlabeled domains.",
"By introducing labeled D-CON s from various domains, we hope to provide the model with a training signal that is less affected by spurious correlations: Correlations between features and the task label which do not hold out-of-domain (OOD) (Veitch et al., 2021).",
"After a brief evaluation of the intrinsic quality of the D-CON s generated by DoCoGen , we evaluate our complete DA pipeline.",
"We focus on two tasks: Binary sentiment classification of reviews and multi-label intent prediction in information-seeking conversations.",
"In both tasks, we follow the UDA and ADA scenarios, for a total of 12 and 8 sentiment setups, respectively, as well as 30 UDA and 48 ADA intent prediction setups.",
"Our results demonstrate the superiority of DoCoGen over strong DA and textual-data augmentation algorithms.",
"Finally, combining DoCoGen with PERL (Ben-David et al., 2020), a SOTA UDA model, yields new SOTA DA accuracy and stability.",
"We first describe research in our DA setups: UDA and ADA.",
"We then continue with the study of counterfactual-based data augmentation, and, fi-nally, we describe research on counterfactual generation methods.",
"Domain Adaptation (DA) The NLP literature contains several DA setups, the most realistic of which is unsupervised domain adaptation (UDA), which assumes the availability of unlabeled data from a source and a target domain, as well as access to labeled data from the source domain (Blitzer et al., 2006).",
"An even more challenging and potentially more realistic setup is the recently proposed any domain adaptation setup (ADA, Ben-David et al. (2021)), which assumes no knowledge of the target domains at training time.",
"There are several approaches to DA, including representation learning (Blitzer et al., 2006; Ziser and Reichart, 2017) and data-centric approaches like instance re-weighting and self-training (Huang et al., 2006; Rotman and Reichart, 2019).",
"Since the rise of deep neural networks (DNNs), most focus in DA research has been directed to deep representation learning approaches (DReL).",
"One line of DReL work employs an input reconstruction objective (Glorot et al., 2011; Chen et al., 2012; Yang and Eisenstein, 2014; Ganin et al., 2016).",
"Another line employs pivot features, which are prominent to the task of interest and common in the source and target domains (Blitzer et al., 2007; Pan et al., 2010; Ziser and Reichart, 2018b; Ben-David et al., 2020; Lekhtman et al., 2021).",
"We deviate from the DReL approach to DA and propose a data-centric methodology.",
"Contrary to the above works, our approach can be applied to both UDA and ADA.",
"Moreover, unlike previous ADA work, which builds upon multi-source DA, our approach can also perform single-source ADA.",
"Counterfactually Augmented Data (CAD) Textual data augmentation (TDA) is a technique for increasing the training dataset without explicitly collecting new examples.",
"This is achieved by adding slightly modified copies of already existing examples (local sampling) or newly created data (global sampling).",
"TDA serves as a solution for insufficient data scenarios and as a technique for improving model robustness (Xie et al., 2020; Ng et al., 2020).",
"There are rule-based and model-based approaches to TDA.",
"Rule-based methods commonly involve insertion, deletion, swap and replacement of specific words (Wei and Zou, 2019), or template-based paraphrasing (Rosenberg et al., 2021).",
"Model-based methods typically utilize a pretrained language model (PLM), e.g., for replacing random words (Kobayashi, 2018; Ng et al., 2020), or generating entirely new examples 7728 Original, Kitchen : A good knife but Quality Control was poor.",
"from a prior data-distribution (Bowman et al., 2016; Russo et al., 2020; Wang et al., 2021).",
"Other model-based methods apply backtranslation (Edunov et al., 2018) or paraphrasing (Kumar et al., 2019) for local sampling.",
"Another approach within local sampling TDA is to change (only) a specific concept that exists in the original example, creating a counterfactual example.",
"Counterfactually-Augmented Data (CAD) is generated by minimally intervening on examples to change their ground-truth label, that is, perturbing only those terms necessary to change the label (Kaushik et al., 2020).",
"CAD is commonly used to improve generalizability (Kaushik et al., 2020; Sen et al., 2021), however empirical results using CAD for OOD generalization have been mixed (Joshi and He, 2021; Khashabi et al., 2020).",
"In this work, we explore a different type of counterfactuals, namely D-CON s, which are the result of intervening only on the example's domain while holding everything else equal, particularly its task label.",
"For sentiment analysis, we may be, for example, interested in revising a negative movie review, making it a negative airline review.",
"In addition, while CAD is mostly generated via a human-in-the-loop process (Kaushik et al., 2020; Khashabi et al., 2020; Sen et al., 2021), our work focuses on automatic counterfactual generation.",
"Counterfactual Generation controllable generation refers to generation of text while controlling for specific attributes (Prabhumoye et al., 2020).",
"The controlled attributes can range from style (e.g., politeness and sentiment) to content (e.g., keywords and entities) and even topic.",
"Keskar et al. (2019) propose to control the generated text by training an LM on datasets annotated with the controlled attributes, and Meister et al. (2020) modify the model's decoding method.",
"Recently, Russo et al. (2020) introduced a global sampling conditional variational autoencoder (VAE), augmenting text while controlling for attributes such as label and verb tense.",
"However, controlling for the task label is challenging in scarce labeled data scenarios (Chen et al., 2021), since generative models require large amounts of labeled data .",
"Counterfactual generation lies at the intersection of controllable generation and causal inference (Feder et al., 2021a).",
"Only few works deal with counterfactual generation, mostly by intervening on the task label.",
"Wu et al. (2021) train a model on textual examples and their manually generated counterfactuals.",
"Other works present methods for controlling for the text domain and semantics (Wang et al., 2020; Feng et al., 2019), yet they all experiment with short texts, while our model can generate longer texts, consisting of multiple sentences.",
"A recent work by Yu et al. (2021) focuses on generation of new target-domain examples for aspect-based sentiment analysis (ABSA) (Pon-tiki et al., 2016).",
"However, this method is designed specifically for ABSA, utilizing predefined knowledge, and is only suitable for UDA setups where source domain labeled data is abundant.",
"Our work presents a novel domain counterfactual generation algorithm, which can be trained in an unsupervised manner, and its generated outputs are demonstrated to be effective in multiple low-resource DA tasks.",
"In this section, we formally define the concept of domain-counterfactual textual examples (D-CON s) and discuss the motivation behind them.",
"Definition x (cid:48) is a domain-counterfactual example (D-CON ) of x if it is a coherent human-like text that is a result of intervening on the domain of x and changing it to another domain, while holding everything else equal.",
"Particularly, we would like the task label of x (cid:48) and x to be identical.",
"Formally, given an example ( x, y ) D and a destination domain D (cid:48) , the goal of D-CON generation is to generate x (cid:48) PD (cid:48) ( X | Y = y ) such that x (cid:48) (cid:39) D (cid:48) x , where (cid:39) D (cid:48) is the domain counterfactual operator.",
"In this work, given a labeled source example x we aim to generate coherent human-like D-CON s from the unlabeled domains (see 1).",
"We propose a D-CON generation algorithm, DoCoGen , consisting of two components.",
"The first involves masking domain specific terms of the given example, yielding M ( x ) .",
"The second is a controllable generation model G which takes as input M ( x ) and a domain orientation vector v (cid:48) .",
"This vector specifies the destination domain D (cid:48) , controlling the semantics of the generated D-CON .",
"Formally: DoCoGen ( x, D (cid:48) ) = G ( M ( x ) , v (cid:48) ) (cid:39) D (cid:48) x Motivation The NLP community has recently become increasingly concerned with spurious correlations (Geirhos et al., 2020; Wang and Culotta, 2020; Gardner et al., 2021).",
"In the case of DA, spurious correlations may be defined as correlations between X and Y which are relevant only to a specific domain or in a certain sample of labeled examples.",
"Such correlations may make a predictor f : X Y brittle to domain shifts.",
"Using counterfactuals w.r.t. a specific variable allows us to both estimate its effect on our predictor (Feder et al., 2021b; Rosenberg et al., 2021) or alleviate its impact on it (Kaushik et al., 2021).",
"We focus on the latter, automatically generating D-CON s by intervening on the domain variable D .",
"Adding these D-CON s to the training set of a predictor should reduce its reliance on domain-specific information and spurious correlations.",
"From a DA perspective, enriching the training data with D-CON s is motivated by pivot features (2), which are frequent in multiple domains and are prominent for the task.",
"D-CON s preserve language patterns, such as pivots, which are frequent in multiple domains.",
"Consider the middle example in Table 1, pivot words (such as excellent and important ) are preserved in the D-CON , while non-pivots ( intereseting and well-paced ) are replaced due to the domain intervention.",
"Accordingly, a model trained on an example and its D-CON is directed to focus on pivots rather than on non-pivots, consequently generalizing better OOD.",
"We propose a corrupt-and-reconstruct approach for generating D-CON s from given source domain examples (Figure 1).",
"We next extend on these two steps, and describe our filtering mechanism used to disqualify low quality D-CON",
"s. 4.1 Domain Corruption The first step of generating a D-CON is to mask domain specific terms.",
"In order to mask an example x D with a destination domain D (cid:48) , we first mask all uni-grams w with m ( w, D , D (cid:48) ) > , where is a hyperparameter and m is a masking score that is defined later in this section.",
"Then, we mask all the remaining bi-grams (that do not contain a masked uni-gram) according to the same masking threshold .",
"This process is repeated up to tri-gram expressions.",
"The final output of the corruption step is a masked example M ( x ) .",
"In Figure 1, the masking scores of uni-grams and bi-grams appear above the input words.",
"An n-gram is masked if and only if its score is above a = 0 .",
"08 threshold and the scores of its grams are lower.",
"For example, system is not masked although the bi-gram entertainment system has a score above the threshold, since entertainment is masked and the score of system is lower than .",
"Masking Score Let w be an n-gram and D be a domain with n D unlabeled examples.",
"We denote the number of examples from D that contain w by # w |D .",
"By assuming that domains have equal prior probabilities and by using the Bayes' rule, the probability of D given w can be estimated by P ( D = D| W = w ) # w |D + n D , where is a smoothing hyperparameter.",
"We define the affinity of w to D to be: ( w, D ) = P ( D| w ) (cid:18) 1 H ( D | w ) log N (cid:19) where N is the number of unlabeled domains and H ( D | w ) is the entropy of D | w , which is upper 7730 The entertainment system failed twice but the crew reactivated it quick.",
"bounded by log N .",
"Notice that higher H ( D | w ) values indicate that w is not related to any specific domain.",
"Finally, we set the masking score of an n-gram w with an origin domain D and a destination domain D (cid:48) as follows: m ( w, D , D (cid:48) ) = ( w, D ) ( w, D (cid:48) ) Note that m ( w, D , D (cid:48) ) [ 1 , 1] .",
"It can be negative due to the right hand side's subtrahend, which aims to prevent masking n-grams that are related to the destination domain and should appear in the counterfactual, like system in Figure 1.",
"The second step of DoCoGen is a reconstruction step that involves a generative model, based on an encoder-decoder T5 architecture (Raffel et al., 2020).",
"Given a masked example M ( x ) and a destination domain D (cid:48) , we concatenate a domain orientation vector v (cid:48) that represents D (cid:48) with the masked input's embedding vectors.",
"Then, the concatenated matrix is passed as an input to the encoder-decoder model for counterfactual generation, yielding x (cid:48) .",
"We next describe the mechanism behind domain orientation vectors.",
"Domain Orientation Vectors In addition to the T5 embedding matrix (T5 Embeddings in Figure 1), we equip our model with another learnable embedding matrix, containing K N orientation vectors, such that each domain is represented by K different vectors (Orientation Embeddings in Figure 1).",
"We initialize the orientation vectors with the T5 embedding vectors of the domain names and the top K 1 representing words of each domain.",
"The top representing words of domain D are those which reach the highest score of: log (# w |D + 1) ( w, D ) .",
"We use K orientation vectors to allow us generate a heterogeneous set of D-CON s for a given destination domain (see examples in A).",
"We note that although the orientation vectors are initialized with vectors from the T5 embedding matrix, they have a different role and thus are likely to converge to different values during the training process.",
"Training In the spirit of low resource learning, we would like to train DoCoGen in an unsupervised manner, i.e., without access to manually generated D-CON",
"s. Therefore, we use the unlabeled data of our unlabeled domains.",
"For each example x , we provide the model with M ( x ) , the corrupted version of x , and v , the orientation vector of D , and with x as the gold output.",
"The model hence learns to reconstruct x given M ( x ) and v .",
"Notice that the origin and the destination domains are the same, i.e, D = D (cid:48) , and the masking score is m ( w, D , D ) = 0 .",
"Hence, for masking purposes, we randomly choose D (cid:54) = D and plug it as the destination domain in the masking score.",
"We then choose an orientation v for D , by randomly sampling either the domain name or one of its representing words as long as it appears in x .",
"Finally, since the orientation vector parameters are trained as part of the reconstruction objective, we establish the connection between the orientation vector and the semantics of the completed example.",
"Hence, we expect that at inference time examples will be properly transformed into their D-CON",
"s. 7731 Inference Given ( x, D , D (cid:48) ) , we first mask the example to get M ( x ) and select one orientation vector v (cid:48) that represents D (cid:48) .",
"2 Together, the tuple ( M ( x ) , v (cid:48) ) forms the input, and accordingly the model generates a D-CON x (cid:48) (cid:39) D (cid:48) x .",
"To increase the likelihood that x (cid:48) originates from D (cid:48) , we restrict the model to generate only tokens of the original example or tokens that are related to D (cid:48) and meet the condition: max i 1 ,...,N m ( w, D (cid:48) , D i ) > .",
"In order to properly apply DoCoGen within a DA pipeline, we introduce a filtering mechanism that disqualifies low quality D-CON s generated by DoCoGen .",
"Particularly, we train a classifier to predict the domain of the original, human-written unlabeled examples, and use it to remove D-CON s if their predicted domain is not the given destination domain.",
"In addition, we disqualify D-CON s with less than four words or when the word overlap with the original example is lower than 25% .",
"We name DoCoGen when equipped with this filtering mechanism F-DoCoGen .",
"We next assess DoCoGen in terms of its generated D-CON s, ensuring they:",
"(i) belong to the correct domain and label (1, 2), and",
"(ii) are fluent (3, 4).",
"To this end, we collected 20 original reviews, equally distributed among four domains (the A, D, E, and K domains, see 6).",
"We then applied DoCoGen to generate 60 D-CON s, 3 for each of the original reviews (see 6 for the DoCoGen training setup).",
"Finally, we trained the VAE model of Russo et al. (2020) on labeled data (all the labeled data of the A, D, E, and K domains) and applied it to generate five reviews from each of the above four domains, with the same number of positive and negative reviews as in the set of original reviews.",
"We then conducted a crowd-sourcing experiment where five nearly native English speakers rated each example, considering the following evaluation measures: (1) Domain relevance ( D . REL ) whether the topic of the generated text is related to its destination domain; (2) Label preservation ( L . PRES ) what is the label of the generated example (and we report whether the answer was identical to the desired label); (3) Linguistic Acceptability ( ACCPT ) how logical and grammatical the example is (on a 1-5 scale); and (4) Word error rate ( WER ) what is 2 B.3 presents the % of masked tokens in our experiments.",
"the minimum number of word substitutions, deletions, and insertions that have to be performed to make the example logical and grammatical.",
"3 Table 2 reports our results.",
"DoCoGen achieves high ACCPT scores and low WER scores, significantly outperforming its VAE alternative, which is known to struggle with longer texts (Shen et al., 2019; Iqbal and Qureshi, 2020).",
"Interestingly, DoCoGen achieves compatible results to the original reviews, indicating the high quality of its generated texts.",
"Finally, in more than 90% of the cases DoCoGen manages to change the example domain to the desired domain, and in 80% it preserves the original example label.",
"In comparison, only 88% of the original examples were annotated as their gold label.",
"In this subsection we describe our tasks and datasets, as well as the two DA setups which are the focus of this work.",
"A full description of the number of samples in each dataset is found in Table 6.",
"Sentiment Classification We follow a large body of prior DA work, focusing on the task of binary sentiment classification.",
"Specifically, our experiments include six different domains: the four legacy product review domains (Blitzer et al., 2007) Books (B), DVDs (D), Electronic items (E) and Kitchen appliances (K); the challenging airline review dataset (A) (Nguyen, 2015; Ziser and Reichart, 2018b); and the restaurant (R) domain obtained from the Yelp dataset challenge (Zhang et al., 2015).",
"The focus of this work is on low resource DA, and thus we randomly sample 100 labeled examples to form the training set for the following domains: A, D, E, and K. As described in 2, we explore two DA setups, UDA and ADA.",
"For UDA, where the model has 3 We actually asked the annotators to edit the example and then measured the number of edit operations.",
"access to unlabeled target domain data, we experiment with 12 cross-domain setups, including the following domains: A, D, E, and K. For ADA, where unlabeled data from the target domain is not within reach, we experiment with a total of 8 setups, including B and R as target domains, and A, D, E, and K as source domains.",
"Our reported accuracy scores are averaged across 25 different seeds and randomly sampled training and development sets.",
"Multi-label intent prediction Our second task is multi-label intent prediction of utterances from information-seeking conversations.",
"We use the multi-domain MANtIS dataset (Penha et al., 2019), consisting of diverse conversations from the question-answering Stack Exchange portal.",
"The authors provide manually annotated user intent utterances, with eight possible intent labels, such as information request , potential answer and greetings .",
"Since we focus on low resource scenarios, we use only the five most common labels, as the frequency of the other three labels is less than 5%, and in some domains they are completely missing.",
"The MANtIS dataset consists of 14 domains: Apple (AP), DBA (DB), Electronics (EL), Physics (PH), Statistics (ST), askubuntu (UB); DIY (DI), English (EN), Gaming (GA), GIS (GI), Sci-Fi (SC), Security (SE), Travel (TR) and Worldbuilding (WO).",
"We use the first 6 domains as unlabeled domains, randomly sampling train, development and test sets for each.",
"The remaining 8 domains are used as target domains in the ADA setup, resulting in 30 UDA ( 6 5 ) and 48 ADA ( 6 8 ) setups.",
"Following Penha et al. (2019), we use the (Macro) F1-score to measure classifier performances, and, like in the sentiment classification task, our reported results are averaged across 25 different seeds and randomly sampled training sets.",
"DA by Augmentation The DA pipeline includes a T5-based sentiment classifier trained on labeled data from a single source domain and an augmentation model (e.g., DoCoGen ) trained on unlabeled data from four unlabeled domains.",
"We first train DoCoGen on the unlabeled data, and then use it for generating D-CON s that enrich the classifier's training data.",
"For each labeled training example, DoCoGen generates K = 4 D-CON s w.r.t. each unlabeled domain, resulting in a total of 16 D-CON s per example.",
"After training the sentiment classifier on the enriched data, we evaluate it on test examples originating from one of the unlabeled domains (UDA) or one of the unseen domains (ADA).",
"We denote each DA model by the algorithm that was used for enriching its training data.",
"Our main models are DoCoGen and F-DoCoGen , which is equipped with the filtering mechanism.",
"We compare them to three types of models:",
"(a) baseline models, including both baselines for the entire DA pipeline (1,2,5) and alternative augmentation methods (3,4);",
"(b) ablation models (6,7) that use variants of our D-CON generation algorithm where one component is modified, highlighting the importance of our design choices; and",
"(c) an upper-bound generation model that has access to labeled data from the target domains.",
"Unless otherwise stated, all sentiment classifiers use the same architecture, based on a pre-trained T5 model.",
"We next describe the models in each of these groups.",
"Baseline DA Models We experiment with five baselines: (1) No-Domain-Adaptation ( NoDA ), A model that is only trained on the available training data from the source domain in each DA setup; (2) Domain-Adversarial-Neural-Network ( DANN ), A model that integrates the sentiment analysis predictive task with an adversarial domain classifier to learn domain invariant representations (Ganin et al., 2016).",
"This model does not apply augmentation, but instead the unlabeled data is used for training its adversarial component; (3) Easy-Data-Augmentation ( EDA ) , an augmentation method that randomly inserts, swaps, and deletes words or replaces synonyms (Wei and Zou, 2019); (4) Random-masking Random-Reconstructing ( RM-RR ), another basic augmentation method that randomly masks tokens from the input example and then fills the masks with tokens that are chosen by a masked language modeling head, as suggested by (Ng et al., 2020); and (5) PERL , a SOTA model for the UDA setup (Ben-David et al., 2020).",
"Ablation Models We consider two variants of DoCoGen : (6) No-Orientation-Vectors ( No-OV ), a generation model that masks tokens by employing a similar masking mechanism as DoCoGen , and then employing a masked language modeling head to fill the masked tokens (without domain orientation vectors); and (7) Random-Masking with Orientation-Vectors ( RM-OV ), a generation model that randomly masks tokens from the input example and then employs the DoCoGen 's reconstruction mechanism to fill the masks.",
"Upper-Bound We implement an upper-bound model for D-CON augmentation, Oracle-Matching Oracle-Gen ).",
"Unlike all other models in this work, Oracle-Gen has access to target domain labeled data.",
"Thus, given an example from a source domain, Oracle-Gen looks for the most similar example with the same label in the target domain, and adds it to its training data (see B.1).",
"Tables 3 and 4 present sentiment classification accuracy results for the 12 UDA and 8 ADA setups, respectively.",
"Table 5 presents the average intent prediction F1 scores for each source domain, taken across all target domains, in both UDA and ADA.",
"D-CON Generation Impact For sentiment classification, our model, F-DoCoGen , outperforms all baseline models ( NoDA , DANN , EDA , and RM-RR ) in 10 of 12 UDA setups and in all ADA setups, exhibiting average performance gains of 1 .",
"9% and 1 .",
"3% over the best performing baseline model in the UDA ( DANN ) and the ADA ( RM-RR ) setups, respectively.",
"Moreover, DoCoGen without filtering, is also superior to all baselines, reaching average gains of 1 .",
"6% and of 0 .",
"6% across all UDA and ADA setups, respectively.",
"For intent prediction, DoCoGen (without filtering) is the best performing model, outperforming all baselines across all setups, and reaching average gains of 1 .",
"6% and 1 .",
"5% across all UDA and ADA setups, respectively.",
"Since many intent examples are not domain-specific, our filtering mechanism tends to easily remove their DoCoGen generated D-CON",
"s. We believe that this is the reason for the small degradation in F-DoCoGen performance compared to DoCoGen .",
"However, F-DoCoGen still consistently outperforms all baselines.",
"These results highlight the impact of D-CON generation on model robustness in low-resource setups.",
"Finally, our models are also stable: Their std is lower than all baselines (see C.1).",
"Ablation Models The tables further demonstrate that F-DoCoGen outperforms its ablation models ( 6.2), namely No-OV and RM-OV , in 10 of 12 and 7 of 8 UDA and ADA sentiment classification setups, respectively, and the same holds for DoCoGen across all intent prediction setups.",
"Furthermore, in sentiment classification, F-DoCoGen achieves an average error reduction of 11 .",
"2% and 5 .",
"0% in UDA and ADA, respectively, over the strongest ablation model ( RM-OV ), while in intent prediction DoCoGen achieves a reduction of 8% and 7 .",
"6% , in both setups, respectively.",
"Fi-7734 Source AP DB EL PH ST UB AVG Setup UDA ADA UDA ADA UDA ADA UDA ADA UDA ADA UDA ADA UDA ADA NoDA 75 .",
"nally, our results demonstrate the importance of inappropriate D-CON s disqualification, as in the task of sentiment classification, F-DoCoGen outperforms DoCoGen in 8 of 12 UDA setups and in all ADA setups.",
"On the other hand, when non domain-specific examples are frequent, filtering might lead to small performance degradation, as happens in the intent prediction task.",
"Our results hence stress the importance of each of DoCoGen 's algorithmic components, i.e. domain-corruption ( 4.1 F-DoCoGen vs RM-OV ) and oriented-reconstruction ( 4.2 F-DoCoGen vs No-OV ).",
"Complementary Effect with SOTA Models We notice that F-DoCoGen replicates the average performance of PERL (Ben-David et al., 2020), the UDA SOTA, in sentiment classification.",
"However, since PERL is based on a different architecture than the rest of the models (BERT vs T5), the models are not directly comparable.",
"PERL is a pivot-based representation learning method for DA, which applies pre-training on unlabeled target data and is hence relevant only for UDA.",
"Since DoCoGen implements a different approach to DA (D-CON generation), we check for the complementary effect of these models: DoCoGen-PERL first augments the labeled data with D-CON s and then continues with the PERL pipeline.",
"As reported in Table 3, DoCoGen-PERL outperforms PERL in 8 of 12 UDA setups, providing an average improvement of 0 .",
"7% .",
"Furthermore, the average std of DoCoGen-PERL is 2 .",
"1 compared to 3 .",
"6 of PERL (C.1).",
"This stresses the stability of DoCoGen-PERL across these challenging setup (Ziser and Reichart, 2019).",
"Unfortunately, we cannot perform an equivalent comparison in the ADA setup, since its SOTA models (Ben-David et al., 2021; Wright and Augenstein, 2020) employ labeled data from multiple sources.",
"Likewise, since PERL is not designed for multi-label prediction, we could not apply it to intent prediction.",
"To the best of our knowledge, we are the first to effectively perform single-source ADA.",
"Training Size Effect We would next like to understand the effect of D-CON s generated by DoCoGen on classifiers trained with manually labeled training sets of various sizes.",
"Figure 2 shows that the effect of D-CON augmentation vanishes when the unaugmented classifier reaches accuracy above 85% and a performance plateau (visualized as an elbow in the curve).",
"These results support our hypotheses that low-resource DA scenarios may result in a model that latch on spurious domain correlations, impeding its performance.",
"Accordingly, generating D-CON s by intervening on the domain essentially reduces the reliance on domain-specific information and spurious correlations.",
"We presented DoCoGen , a corrupt-and-reconstruct approach for generating domain-counterfactuals (D-CON s) and apply it as a data augmentation method in low-resource DA.",
"We hypothesized that D-CON s may mitigate the reliance on domain-specific features and on spurious correlations and help generalize out of domain.",
"Our augmentation strategy yields robust models that outperform strong baselines across many low-resource DA setups.",
"In future work we would like to further improve the controllable generation quality of DoCoGen , potentially extending it to control for multiple attributes.",
"Moreover, we would like our methodology to address additional NLP tasks and DA setups.",
"We would like to thank the action editor and the reviewers, as well as the members of the IE@Technion NLP group for their valuable feedback and advice.",
"This research was partially funded by an ISF personal grant No. 1625/18."
] | [
"abstain",
"objective",
"method",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"result",
"objective",
"abstain",
"other",
"other"
] |
[
"Text analytics is a useful tool for studying malware behavior and tracking emerging threats.",
"The task of automated malware attribute iden-tification based on cybersecurity texts is very challenging due to a large number of malware attribute labels and a small number of training instances.",
"In this paper, we propose a novel feature learning method to leverage diverse knowledge sources such as small amount of human annotations, unlabeled text and speci-fications about malware attribute labels.",
"Our evaluation has demonstrated the effectiveness of our method over the state-of-the-art malware attribute prediction systems.",
"Securing computer systems has become a necessity for both organizations and individuals, as many cyber attacks result in devastating consequences.",
"An outbreak of the WannaCry ransomware in 2017 affected more than 200,000 computers across 150 countries, with total damages ranging from hundreds of millions to billions of dollars (Berr, 2017).",
"Detection of malware often relies on an understanding of the characteristics of malware behavior.",
"To establish a standard for unambiguously characterizing malware, MAEC (Malware Attribute Enumeration and Characteri-zation), a community-based project organized by MITRE, has specified a set of standard malware attributes (Kirillov et al., 2011).",
"Based on MAEC, the actions of a malware can be categorized by four attributes: ActionName , Capability , StrategicObjectives and TacticalObjectives .",
"ActionName specifies the actions taken by a malware.",
"For example, delete file is a malware action that deletes existing files from affected systems.",
"Capability defines the general capabilities of a malware.",
"For example, anti-removal is a malware capability that prevents itself from being removed from a system.",
"StrategicObjectives and TacticalObjectives are subcategories of Capability to capture more details.",
"For example, a malware can have a StrategicObjective of staging data for exfiltra-tion and a TacticalObjective of moving data to a staging server.",
"In total, MAEC specified 211 ActionNames , 20 Capabilities , 65 StrategicObjectives , and 148 TacticalObjectives .",
"The goal of this research is to automatically assign malware attribute labels based on cybersecurity texts.",
"The task is challenging.",
"The system needs to assign a large number of labels (444 in total).",
"However, it is difficult to obtain sufficient training examples for each label since malware attribute labeling requires extensive cybersecurity knowledge and only domain experts can do this reliably.",
"Given a large number of possible labels and a small number of training examples, typical supervised text classification techniques do not work well.",
"In this work, we focus on incorporating additional knowledge sources to improve feature learning.",
"The main contributions of this work include",
"1. Develop a novel malware attribute prediction system with the state of the art performance to automatically characterize malware behavior based on cybersecurity text.",
"2. Propose a novel Word Annotation Embedding (WAE) algorithm to encode diverse information from heterogeneous knowledge sources such as human annotations, raw texts and MAEC specifications.",
"3. Since WAE generates embeddings for both words and malware attribute labels, we construct high-quality predicting features based on both types of embeddings.",
"Cybersecurity researchers have recently recognized the benefits of leveraging information extracted from security documents.",
"Most prior work utilizing NLP for cybersecurity has focused on privacy policy analysis and Android security (Peng et al., 2012; Pandita et al., 2013; Qu et al., 2014; Slavin et al., 2016; Zhu and Dumitras, 2016).",
"These systems aim to map written policies and the actual permission requests from Android applications and assess the risk level of these applications.",
"Lim et al. (2017) represents the first major effort to apply NLP techniques for general text-based malware behavior analysis.",
"It processes reports by cybersecurity companies (e.g., FireEye, IBM X-Force, Symantec and Trend Micro) on malware or campaigns associated with Advanced Persistent Threat (APT) groups (Blanda and Westcott, 2018) and assign attributes to identified malware actions.",
"They use word unigrams as predicting features.",
"SVM and Naive Bayes are used to build classifiers for attribute label prediction.",
"To extend this effort, SemEval organized a shared task (called SecureNLP) on semantic analysis for cybersecurity texts.",
"It adopted the same dataset and task definitions as (Lim et al., 2017).",
"There are four subtasks in SemEval SecureNLP: (1) identifying sentences containing malware actions from APT reports; (2) identifying Malware Action, Subject of Action, Object of Action and Modifier of Actionin the identified sentences; (3) identifying four relations,Subject-Action, Action-Object, Modifier-Action and Action-Modifier in identified sentences; (4) assigning attribute labels to each identified action based on the MAEC specification.",
"Figure 1 shows an annotated example for these tasks.",
"This paper describes our approach to solve subtask",
"4. The input to our system includes all the sentences identified in subtask 1 with additional labels for the entities identified in subtask",
"2. Each training and testing instance used in SemEval SecureNLP only contains a single malware action.",
"Figure 2 shows the high-level system architecture.",
"First, we augment all the raw APT reports with annotations to encode the knowledge from both the training data and MAEC.",
"This allows us to design a unified representation for both types of knowledge.",
"The annotated texts are then used by WAE to simultaneously learn embeddings for both words and malware attribute labels.",
"The learned word and attribute label embeddings are then used to construct high qualify prediction features.",
"Finally, we employ supervised machine learning to predict malware attribute labels.",
"We build four classifiers, one for each malware attribute.",
"Each classifier performs n +1-way classification, where n is the number of possible labels for each attribute and 1 is for no value' when the value of an attribute is not conveyed in the text.",
"To annotate text with additional information, the first step is to map each attribute label to a set of keywords based on both MAEC and the human annotations in the training data.",
"Identify Keywords from MAEC: Figure 3 shows a snippet of the MAEC specification.",
"Each malware attribute label in MAEC includes a description and a few keywords.",
"The malware action 004 in Figure 3 has a name emulate driver', a description specify the defined action of emulating an existing driver on a system' and two keywords: driver' and emulate'.",
"Since the keywords carry the most essential information about a malware at-Figure 2: System Architecture Figure 3: A Snippet of the MAEC Specification tribute label, we link each label with these keywords (e.g., ActionName004: driver', emulate').",
"Identify Keywords from Training Data: Since a malware attribute label in the training data is at the sentence level, to extract keywords for each attribute label, we extract all the sentences associated with the same label and consider them one document.",
"To select the most relevant keywords, we only keep those conveying Malware Action, Subject of Action, or Object of Action ( which were identified in subtask 2).",
"Keywords Ranking: For the same attribute label, we merge the keywords from MAEC and those from the training data to form a single document.",
"We then use TF-IDF scores to select the most informative keywords to differentiate these labels.",
"In our experiments, we use the top 25 keywords based on their TF-IDF scores.",
"Text Annotation Generation Finally, for all the APT documents, we annotate the text with malware attribute labels.",
"Specifically, for any word in the APT documents, if it is a keyword associated with K different labels, we annotate the word with K attribute labels.",
"Similar to word embedding (Mikolov et al., 2013), we want to learn features that capture the semantic relations between words.",
"In addition, to facilitate attribute classification, we want to capture the semantic relations between words and their labels.",
"Specifically, we want the words and their attribute labels to be close to each other in the embedding space and the embeddings of different labels to be far away from each other.",
"To achieve the goals, we developed a novel Word Annotation Embedding method.",
"As shown in Figure 4, the target word is used to predict not only its context words but also its labels.",
"To further strengthen their relations, the labels of the target word are also used to predict the target word.",
"Specifically, given a sequence of T words ( W 1 ,.., W t ,..., WT ) and their annotations (( A 1 , 1 ,..., A 1 ,M 1 ) ,..., ( AT, 1 ,..., A T,M T )), the objec-Figure 4: Architecture of WAE Model tive of the WAE model is to maximize the average log probability shown in Equation 1, where C is the size of the context window, W t is the target word, W t + j is a context word, M t is number of annotations W t has and A t,k is the k -th annotation of W t .",
"1 TT (cid:88) t =1 (cid:16) (cid:88) C j C,j (cid:54) =0 log P ( W t + j | W t ) + (cid:88) 0 k M t (log P ( A t,k | W t ) + log P ( W t | A t,k )) (cid:17) (1) Label-aware Negative Sampling: Negative sampling (Mikolov et al., 2013) was introduced as an approximation method in word2vec to improve the efficiency of model training.",
"Previously, negative samples were selected either randomly or based on popularity.",
"The method, however, is insensitive to class labels.",
"Here, we propose a new annotation-aware negative sampling method to (1) keep different annotations apart in the embedding space and (2) to keep words associated with different labels apart, in addition to bringing words and their associated labels closer to each other via positive samples.",
"To achieve this, WAE randomly selects (1) a word as a negative sample if it does not share the same annotations as the target word; (2) an attribute label as a negative sample if it is not the same as the labels of the target word.",
"(S1)",
"W AEW +Sim: Assume the average word embeddings for a given data instance generated by WAE is W E wae , and the malware attribute label Models ActionName Capability StratObj TactObj (B1) (Lim et al., 2017) 0.33 0.41 0.27 0.22 (B2) Word2vec 0.40 0.57 0.41 0.36 (B3) Word2vec+Cap NA NA 0.41 0.39 (S1) WAEW +Sim 0.44 0.61 0.43 0.41 (S2) w.WAE W +Sim 0.46 0.62 0.44 0.43 (S3) WAEW + WAEL +Sim 0.45 0.63 0.46 0.43 (S4) w.WAE W + WAEL +Sim 0.45 0.63 0.45 0.43 (s5) WAEW + WAEL +Sim+Cap NA NA 0.47 0.45 (S6) w.WAE W + WAEL +Sim+Cap NA NA 0.47 0.45 1 : over (Lim et al., 2017) 39% ( p < 0 . 05 ) 54% ( p < 0 . 05 ) 74%( p < 0 . 05 ) 105% ( p < 0 . 05 ) 2 : over word2vec 15% ( p < 0 . 05 ) 11%( p < 0 . 05 ) 15%( p < 0 . 05 ) 25%( p < 0 . 05 ) 3 : over word2vec+Cap NA NA 15% ( p < 0 . 05 ) 15%( p < 0 . 05 ) Table 1: Evaluation Results.",
"embedding learned by WAE is LabelE i for each label i .",
"For each LabelE i , we compute SIM i , which is the cosine similarity between W E wae and LabelE i .",
"W AEW +Sim is the concatenation of W E wae and all the SIM i .",
"For example, to predict ActionName , the model will include 100 word embedding features learned by WAE plus 211 similarity features, one for each ActionNames .",
"(S2) w.W AEW +Sim: This feature set is similar to W AEW +Sim except when computing W E wae , we assign twice as much weight for a word with a label as one without a label.",
"The intuition is words with labels are important keywords based on either MAEC or the training data.",
"(S3)",
"W AEW + W AEL +Sim: It is similar to W AEW +Sim except we also include the average embeddings of attribute labels associated with the instance.",
"(S4) w.W AEW + W AEL +Sim: This is the weighted version of (S3).",
"(S5)",
"W AEW + W AEL +Sim+Cap: Since the label of StrategicObjective and TacticalObjective depends on the label of Capability , we added the capability label in the feature set.",
"We use a 1-hot vector with 20 elements to encode a Capability label.",
"We use the ground truth and the predicted label of Capability during training and testing respectively.",
"(S6) w.W AEW + W AEL +Sim+Cap: This is the weighted version of (S5) 7 Experiments 7.1 Dataset The dataset used in the experiments was provided as a part of the SemEval shared task.",
"It contains 456 APT reports (Blanda and Westcott, 2018), 39 of them were annotated by humans.",
"Among them, 2975 sentences contain malware actions, which are the data instances used in this study.",
"The annotated data are very sparse.",
"Out of the 444 attribute labels, 190 labels do not appear in the labeled data.",
"For the remaining 254 attribute labels, 92 labels occur less than five times, and 50 labels occur only once.",
"In our experiments, we used the raw text in all the APT reports to train WAE.",
"There are 16423 unique tokens and a total of 2544645 tokens in the dataset.",
"We trained both word2vec and WAE with context window size 5 and 100 dimension vectors.",
"The 39 annotated documents were divided into a training set (23 documents), a validation set (8 documents) and a test set (8 docu-ments).",
"Only the training dataset was used to generate annotations for WAE.",
"For classification, we tried both SVM and neural network-based models such as multilayer percep-tron.",
"After experimenting with different model parameters, we found that the best SVM model with a linear kernel performed slightly better than the best neural network models.",
"We speculate that this might be because SVM is less likely to overfit when the training data are sparse.",
"Table 1 shows the average F-scores over 5 runs by the SVM models on the test data.",
"We compare our models with three baseline systems: (B1) (Lim et al., 2017), (B2) word2vec and (B3) word2vec + cap.",
"Among them, (B1) represents the best published results on the same dataset.",
"(B2) and (B3) are all the comparable models with embeddings learned by word2vec.",
"As shown in Table 1, all our models outperformed all the baseline systems.",
"The improvement over the word2vec model is 15%, 11%, 15% and 25% respectively, and, the improvement over (Lim et al., 2017), a previous state of the art, is 39%, 54%, 74% and 105% respectively.",
"The improvement over W ord 2 vec + Cap is 15% and 15% respectively for StrategicObjective and TechnicalObjective .",
"We also conducted t-tests to verify the significance of the improvements.",
"The t-test results confirmed that our models significantly outperformed the baseline models with p < 0.05.",
"Moreover, the value of Capability seems to help the prediction of StrategicObjective and TechnicalObjective .",
"In this paper, we present a novel method to predict malware attribute labels from cybersecurity text.",
"Given a large number of attribute labels and limited training data, we propose a new feature learning method to incorporate knowledge from diverse knowledge sources such as raw text, MAEC spec-ifications and human annotations.",
"We tested our system using the SemEval shared task data and our evaluation demonstrates that the features learned by our models are much more effective than an existing state of the art as well as embedding features learned by word2vec.",
"Our investigation has highlighted the importance of incorporating diverse knowledge sources in complex classification tasks when human annotations are sparse."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"objective",
"objective",
"result"
] |
[
"The few-shot natural language understanding (NLU) task has attracted much recent attention.",
"However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring progress of the field.",
"To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability.",
"Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks.",
"Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline.",
"We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.",
"1 2 1 Introduction Few-shot learning for natural language understanding (NLU) has been significantly advanced by pretrained language models (PLMs; Brown et al., 2020; Schick and Schtze, 2021a,b).",
"With the goal of learning a new task with very few (usually less than a hundred) samples, few-shot learning benefits from the prior knowledge stored in PLMs.",
"Various few-shot methods based on PLMs and prompting have been proposed (Liu et al., 2021b; Menon et al., 2021; Gao et al., 2020).",
"The authors have contributed equally to this work.",
"Corresponding Authors.",
"1 Leaderboard: https://fewnlu.github.io 2 Code available at https://github.com/THUDM/ FewNLU Although the research of few-shot NLU is developing rapidly, the lack of a standard evaluation protocol has become an obstacle hindering fair comparison between various methods on a common ground and measuring progress of the field.",
"While some works (Schick and Schtze, 2021b; Menon et al., 2021) experimented with a fixed set of hyper-parameters, prior work (Perez et al., 2021; Zhang et al., 2020) noted that such a setting might be exposed to the risk of overestimation .",
"3 Other works (Liu et al., 2021b; Gao et al., 2020; Perez et al., 2021) proposed to use a small development set to select hyper-parameters, but their evaluation protocols vary in a few key aspects (e.g., how to construct data splits), which in fact lead to large differences as we will show (in Section 4.2).",
"The above phenomena highlight the need for a common protocol for the evaluation of few-shot NLU methods.",
"However, the fact that few-shot learning is extremely sensitive to subtle variations of many factors (Dodge et al., 2020; Gao et al., 2020) poses challenges for designing a solid evaluation protocol.",
"In this work, aiming at addressing the aforementioned challenge, we propose an evaluation framework for few-shot NLU.",
"The evaluation framework consists of a repeated procedureselecting a hyper-parameter, selecting a data split, training and evaluating the model.",
"To set up a solid evaluation framework, it is crucial to specify a key design choicehow to construct data splits for model selection.",
"We conduct a comprehensive set of experiments to answer the question.",
"Specifically, we propose a Multi-Splits strategy, which randomly splits the available labeled samples into training and development sets multiple times, followed by aggregating the results from each data split.",
"We show that this simple strategy outperforms several 3 This is because the fixed hyper-parameters are selected according to practical considerations, which are informed by the test set performance from previous evaluations.",
"baseline strategies in three dimensions: (1) the test set performance of the selected hyper-parameters; (2) correlation between development set and true test set performance; and (3) robustness to hyper-parameter settings.",
"We then take a step further to re-evaluate recent state-of-the-art few-shot NLU methods under this common evaluation framework.",
"Our re-evaluation leads to several key findings summarized in Section 2. To aid reproducing our results and benchmarking few-shot NLU methods, we open-source FewNLU, a toolkit that contains implementations of a number of state-of-the-art methods, data processing utilities, as well as our proposed evaluation framework.",
"To sum up, our contributions are as follows.",
"1. We introduce a new evaluation framework of few-shot NLU.",
"We propose three desiderata of few-shot evaluation and show that our framework outperforms previous ones in these aspects.",
"Thus our framework allows for more reliable comparison of few-shot NLU methods.",
"2. Under the new evaluation framework, we benchmark the performance of recent methods individually as well as the best performance with a combined approach.",
"These benchmarks reflect the current state of the art and will serve as important baselines for future research.",
"3. Throughout our exploration, we arrive at several key findings summarized in Section 2. 4. We open-source a toolkit, FewNLU, to facilitate future research with our framework.",
"Finding 1. Our proposed Multi-Splits is a more reliable data-split strategy than several baselines with improvements in (1) test performance, (2) correlation between development and test sets, and (3) stability w.r.t. the number of runs.",
"Finding 2. The absolute performance and the relative gap of few-shot methods were in general not accurately estimated in prior literature.",
"It highlights the importance of evaluation for obtaining reliable conclusions.",
"Moreover, the benefits of some few-shot methods (e.g., ADAPET) decrease on larger pretrained models.",
"Finding 4. No single few-shot method dominates most NLU tasks.",
"This highlights the need for the development of few-shot methods with more consistent and robust performance across tasks.",
"The pretraining-finetuning paradigm (Howard and Ruder, 2018) shows tremendous success in few-shot NLU tasks.",
"Various methods have been developed such as [CLS] classification (Devlin et al., 2018), prompting-based methods with discrete prompts (Schick and Schtze, 2021b; Gao et al., 2020) or continuous prompts (Liu et al., 2021b; Shin et al., 2020; Li and Liang, 2021; Lester et al., 2021), and methods that calibrate the output distribution (Yang et al., 2021; Zhao et al., 2021).",
"The fact that few-shot learning is sensitive to many factors and thus is extremely unstable (Liu et al., 2021a; Lu et al., 2021; Zhang et al., 2020; Dodge et al., 2020) increases the difficulty of few-shot evaluation.",
"Several works address evaluation protocols to mitigate the effects of instability: Gao et al. (2020) and Liu et al. (2021b) adopt a held-out set to select models.",
"Perez et al. (2021) proposed K -fold cross-validation and minimum description length evaluation strategies.",
"Our work differs from these works on few-shot evaluation in several aspects: (1) we propose three metrics to evaluate data split strategies; (2) while most prior work proposed evaluation protocols without justification, we conduct comprehensive experiments to support our key design choice; (3) we formulate a general evaluation framework; (4) our re-evaluation under the proposed framework leads to several key findings.",
"Though there have been a few existing few-shot NLP benchmarks, our work is quite different in terms of the key issues addressed.",
"FLEX (Bragg et al., 2021) and CrossFit (Ye et al., 2021) studied principles of designing tasks, datasets, and metrics.",
"FewGLUE (Schick and Schtze, 2021b) is a dataset proposed for benchmarking few-shot NLU.",
"CLUES (Mukherjee et al., 2021) pays attention to the unified format, metric, and the gap between human and machine performance.",
"While the aforementioned benchmarks focus on what data to use and how to define the task, our work discussed how to evaluate which aims at establishing a proper evaluation protocol for few-shot NLU methods.",
"Since FewNLU is orthogonal to the 502 aforementioned prior work, it can also be employed on the data and tasks proposed in previous work.",
"Formally, for a few-shot NLU task, we have a small labeled set D label = { ( x i , y i ) } Ni and a large test set D test = { ( x test i , y test i ) } i where N is the number of labeled samples, x i is a text input (consisting of one or multiple pieces), and y i Y is a label.",
"The goal is to finetune a pretrained model with D label to obtain the best performance on D test .",
"An unlabeled set D unlab = { x unlab i } i may additionally be used by semi-supervised few-shot methods (5.1).",
"Our preliminary results (in Appendix A.1) show that using a fixed set of hyper-parameters (Schick and Schtze, 2021a,b) is sub-optimal, and model selection is required.",
"It motivates us to study a more robust evaluation framework for few-shot NLU.",
"The goal of an evaluation framework is twofold: (1) benchmarking few-shot methods for NLU tasks such that they can be fairly compared and evaluated; and (2) obtaining the best few-shot performance in practice.",
"In light of the two aspects, we propose the few-shot evaluation framework in Algorithm 1. The framework searches over a hyper-parameter space H to evaluate a given few-shot method M , obtaining the best hyper-parameter setting h (cid:63) and its test set results.",
"4 The measurement for each h is estimated by performing training and evaluation on multiple data splits (obtained by splitting D label according to a strategy) and reporting their average dev set results.",
"Finally, the method is evaluated on D test using the checkpoints corresponding to h (cid:63) .",
"For benchmarking, we report the average and standard deviation over multiple test set results.",
"Otherwise, that is, to achieve a model with the best practical performance, we re-run on the entire D label with h (cid:63) .",
"The framework requires specifying a key design choicehow to construct the data splits, which we will discuss in 4.2.",
"4 For simplicity and ease of use, we use grid search for searching the hyper-parameter space H and identify critical hyper-parameters to limit its size.",
"More complex search methods such as Bayesian Optimization (Snoek et al., 2012) could be used to search over larger hyper-parameter spaces.",
"We first propose the following three key desiderata for the evaluation of different data split strategies.",
"1. Performance of selected hyper-parameter.",
"A good data split strategy should select a hyper-parameter that can achieve a good test set performance.",
"We use the same metrics as (Schick and Schtze, 2021b), along with standard deviations.",
"2. Correlation between dev and test sets (over a hyper-parameter distribution) .",
"Since a small dev set is used for model selection, it is important for a good strategy to obtain a high correlation between the performances on the small dev set and test set over a distribution of hyper-parameters.",
"We report the Spearman's rank correlation coefficient for measurement.",
"3. Stability w.r.t. number of runs K .",
"The choice of the hyper-parameter K should have small impacts on the above two metrics (i.e., performance and correlation).",
"To analyze the stability w.r.t K , we report the standard deviation over multiple different values of K .",
"Besides, it is desirable to have reduced variance when K increases.",
"Thus we report the above two metrics with different 503 values of K and the standard deviation of test scores over K runs.",
"We consider several data split strategies.",
"Some are proposed by previous work, including K -fold cross validation (CV) (Perez et al., 2021), minimum description length (MDL) (Perez et al., 2021), and bagging (BAG) (Breiman, 1996).",
"We also consider two simple strategies worth exploring, including random sampling (RAND) and model-informed splitting (MI).",
"And we propose a new data split strategy, Multi-Splits (MS).",
"Besides, we also experiment a special case of CV when K equals the number of labeled sample, which is leave-of-out cross validation (LOOCV).",
"Since LOOCV takes much longer time and suffers from efficiency problem, we only experimented on several tasks and left the results in Appendix A.2.4.",
"They all fit into the pipeline of the proposed framework in 4.1: 1. K -fold CV equally partitions D label into K folds.",
"Each time, it uses the k th fold for validation and the other K 1 folds for training.",
"2. MDL assigns half of D label as the joint training data and equally partitions the other half into K folds.",
"Each time, it uses the k th fold for validation, and all its previous k 1 folds together with the joint training data for training.",
"3. Bagging samples N r ( r (0 , 1] is a fixed ratio) examples with replacement from the labeled sample as the training set, leaving samples that do not appear in the train set for validation.",
"4. Random Sampling performs random sampling without replacement from D label twice, respectively sampling N r and N (1 r ) data as the training and development sets.",
"5. Model-Informed Splitting computes representations of each labeled example using a model, and clusters them into two distinct sets, respectively as the training and development sets.",
"5 6. Multi-Splits randomly splits D label into training and development sets using a fixed split ratio r .",
"Essentially, these data split strategies differ in several key aspects.",
"1. For CV and MDL, K controls the number of runs and the split ratio.",
"For Multi-Splits, BAG and RAND, the split ratio is decoupled from K and is controlled by r .",
"For MI, the split ratio and number of runs depend on D label .",
"5 Specifically, we used a BERT-Base model to encode data and take the [CLS] representations.",
"2. They use a different amount of data for training and development sets as Table 1 shows.",
"3. There are cases when CV and MS share the same split ratio.",
"The difference is that MS allows overlap between splits while CV does not.",
"4. BAG allows duplicated training data, while RAND and Multi-Splits do not.",
"The training and development sets do not overlap for BAG and Multi-Splits but overlap for RAND.",
"In the limit, our Multi-Splits is similar to leave-P -out cross-validation (LPOCV; Celisse, 2014) 6 where LPOCV runs (cid:0) NP (cid:1) times ( P is the number of dev set examples) while Multi-Splits runs K times.",
"As K increases, Multi-Splits gradually approaches LPOCV.",
"Since it is impossible to enumerate the large number of possible splits in practice, Multi-Splits can be viewed as a practical version of LPOCV.",
"Compared to the strategy of (Gao et al., 2020) that uses multiple datasets, our Multi-Splits uses multiple data splits for a single dataset.",
"It is thus more practical as in real-world scenarios, it is hard to obtain multiple labeled datasets for a true few-shot problem; otherwise, it could be formulated as a fully-supervised learning problem.",
"The strategy in (Liu et al., 2021b) is a special case of Multi-Splits when K = 1 , which suffers from higher variance.",
"To evaluate different data split strategies, we experiment on the FewGLUE benchmark (Schick and Schtze, 2021b).",
"We evaluate strategies based on the widely used prompt-based few-shot method PET (Schick and Schtze, 2021b) with DeBERTa as the base model.",
"7 We run experiments on the same tasks with the same hyper-parameter space 6 LeaveP -out cross-validation uses P data examples as the development set and the remaining data examples as the training set.",
"This is repeated on all ways to cut the labeled dataset in a development set and a training set.",
"7 We fixed the parameters of DeBERTa's bottom one-third layers due to GPU memory limitations, which did not affect the performance much in our preliminary experiments.",
"to ensure a fair comparison; in this experiment we search learning rate, evaluation ratio, prompt pattern and maximum training step.",
"More experimental details are in Appendix A.2.",
"Table 2, Table 3 and Figure 1 show the main results with 64 labeled samples.",
"It is noteworthy that we also experimented with 32 labeled samples and have observed that varying the number of labeled examples does not affect the following conclusion (see Appendix A.2).",
"Test Performance and Correlation.",
"From both Table 2 and Table 3, we find that Multi-Splits achieves the best average test set performance as well as the best average correlation among all strategies.",
"We analyze them as follows: 8 1. Multi-Splits uses fewer labeled samples for training (i.e., 128) while CV and MDL use more (i.e., 192 and 176).",
"Despite using more training data, both CV and MDL do not perform better.",
"This indicates few-shot performance is limited by not being able to select the best model rather than not having sufficient training data.",
"Both CV and MDL use fewer data for validation (i.e., 64 and 32) than Multi-Splits (i.e., 128), thus leading to poor correlation.",
"2. Although Multi-Splits and BAG use the same number of training data (i.e., 128), there could be duplication in the training set of BAG, making it 8 In the following explanation, the numbers refer to the total training/development data covering K =4 runs.",
"poor in diversity and further leading to lower test performance, compared to Multi-Splits.",
"This indicates diversity of training sets is crucial when constructing few-shot data splits.",
"3. RAND uses similar-sized dev and train sets to BAG and MS but performs worse in test performance.",
"Since there could be overlap between train and dev sets, the model may have memorized data, leading to poor test performance.",
"4. MI constructs very different train and dev sets.",
"Overfitting on one of them and validating on the other pose more challenges for the few-shot method on out-of-distribution tasks.",
"some representative strategies.",
"Both CV and MDL represent strategies whose number of runs are coupled with the size of data split, while Multi-Splits represents strategies that have a fixed ratio and in-dependent K .",
"We observe: (1) Multi-Splits (blue lines) is the most stable in correlation and performance, while other strategies CV and MDL are more sensitive to the choice of K .",
"(2) Multi-Splits shows the smallest variance over multiple runs on both BoolQ and RTE.",
"For COPA, though Multi-Splits shows high variance when K = 2 , the variance becomes smaller with larger K , while CV and MDL suffer from increasing or unstable variance.",
"A possible explanation is that increasing K does not affect the number of training and development examples for Multi-Splits; instead, it increases the confidence of results.",
"An important practical ben-efit of Multi-Splits is that one can always choose to increase K for lower variance.",
"However, for CV and MDL, the sizes of training and development sets are affected by K , where extremely large K value leads to a failure mode and extremely small K leads to unstable results.",
"In practice, it is hard to know which value of K to use a priori.",
"and analysis, we arrive at the following finding.",
"Finding 1. Our proposed Multi-Splits is a more reliable data-split strategy than several baselines with improvements in (1) test performance, (2) correlation between development and test sets, and (3) stability w.r.t. number of runs.",
"Remark Our evaluation framework is better in terms of test performance, dev-test correlation, and stability, which proves it can achieve possible peak performance, reliably select the corresponding hy-perparameters according to dev results without overfitting, and mitigate the effects of randomness to the maximum extent.",
"Therefore, the estimation of our evaluation framework for model performance is more reliable than previous evaluations.",
"We now proceed to re-evaluate state-of-the-art few-shot methods under our evaluation framework with the Multi-Splits strategy.",
"We consider two types: minimal few-shot methods , which only assume access to a small labeled dataset, including Classification (CLS; Devlin et al., 2018), PET (Schick and Schtze, 2021b), ADAPET (Menon et al., 2021), P-tuning (Liu et al., 2021b) and FlipDA (Zhou et al., 2021); and semi-supervised few-shot methods , which allow accessing an additional unlabeled dataset, including PET+MLM (Schick and Schtze, 2021a), iPET (Schick and Schtze, 2021b) and Noisy Student (Xie et al., 2020).",
"The same benchmark datasets, metrics, and hyper-parameter space as in 4.2.3 are used.",
"We use 32 labeled samples for training.",
"We consider two labeling strategies to obtain the pseudo-labels on unlabeled samples used by the semi-supervised methods for self-training, including single-split labeling and cross-split labeling .",
"In the single-split setting (Schick and Schtze, 2021b), pseudo-labels are generated by the models trained on the same data split.",
"In the cross-split setting in our evaluation framework, the pseudo-labels are generated by the models trained on multiple different data splits.",
"More configuration details are in Appendix A.4.",
"Re-Evaluation Results Table 4 shows our reevaluation results.",
"The prompt-based fine-tuning paradigm significantly outperforms the classification fine-tuning on all tasks and on both pretrained models (with an advantage of more than 15 points on average).",
"DeBERTa outperforms ALBERT consistently.",
"We observe significant differences in performance between different prompt-based minimal few-shot methods with ALBERT (e.g., ADAPET and FlipDA outperform PET respectively by about 4 points and 2 points on average) while differences with DeBERTa are slight (e.g., PET, ADAPET, P-tuning, and FlipDA have a performance gap of only about 1.0 points on average).",
"In contrast, semi-supervised few-shot methods (i.e., iPET and Noisy) generally improve 12 points on average compared to minimal few-shot methods on both models.",
"Comparison to Prior Evaluations Since we have proved that our evaluation framework is more reliable in estimating method performance as shown in Section 4.2.4, we conduct experiments to compare the estimates by our evaluation framework and prior evaluations to study whether model performance was accurately estimated in prior work.",
"Table 6 lists the absolute performance from prior evaluations and our evaluation.",
"Results show the absolute performance of few-shot methods in prior evaluations was generally overestimated on RTE 506 BaseModels Few-ShotMethods BoolQ RTE WiC CB MultiRC WSC COPA Avg.",
"The RoBERTa (fully-sup.) results by (Liu et al., 2019).",
"RoBERTa-large has less parameters than DeBERTa-xxlarge-v2.",
"Table 4: Re-evaluation of few-shot methods on ALBERT and DeBERTa under our evaluation framework with Multi-Splits strategy on test set of our setup.",
"For iPET and Noisy Student, (cross) and (single) respectively means cross-split labeling and single-split labeling strategies as introduced in 5.2.",
"Our Best (few-shot) is the results achieved by a combination method as introduced in 5.4.",
"Globally best results for each task are in bold.",
"Best results for minimal few-shot methods are underlined.",
"(cid:58)(cid:58)(cid:58) Best (cid:58)(cid:58)(cid:58)(cid:58)(cid:58) results (cid:58)(cid:58)(cid:58) for (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) semi-supervised (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) few-shot (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) methods are marked with wavelines.",
"and COPA.",
"Similar findings have been highlighted in prior works (Perez et al., 2021; Zhang et al., 2020), and our evaluation framework confirms the findings under a more reliable setup.",
"This results from a more reliable evaluation procedure that emphasizes dev-test correlation to prevent overfitting (discussed in Section 4.2).",
"Besides, the relative gaps between different methods were not accurately estimated by the prior reported numbers.",
"For example, according to the reported results in prior works, ADAPET outperforms P-Tuning on COPA and P-Tuning beats ADAPET on WiC, while our evaluation reveals the opposite.",
"On one hand, this is because prior results were obtained under a less reliable evaluation procedure (discussed in Section 4.2).",
"Deviation in the estimates of absolute performance contributes to inaccuracy in the estimates of relative performance.",
"On the other, prior experiments were not conducted under a shared evaluation procedure.",
"These two factors are corrected by our re-evaluation under the more reliable proposed framework.",
"To sum up, our re-evaluation compares all methods on a common ground, revealing the following: Finding 2. The absolute performance and the relative gap of few-shot methods were in general not accurately estimated in prior literature.",
"This is corrected by our new evaluation framework with improved reliability.",
"It highlights the importance of evaluation for obtaining reliable conclusions.",
"Moreover, the benefits of some few-shot methods (e.g., ADAPET) decrease on larger pretrained models like DeBERTa.",
"We further explore the best few-shot performance by combining various methods, and evaluating under our evaluation framework.",
"For combined options, we consider five minimal few-shot methods (i.e., CLS, PET, ADAPET, P-tuning, and FlipDA), five training paradigms (i.e., single-run, iPET (sin-gle/cross), and Noisy Student (single/cross)), and the addition of a regularized loss (+MLM).",
"We experiment with all possible combinations and report the best for each task.",
"Best (few-shot) in Table 4 achieves the best results on all tasks among all methods.",
"Existing few-shot methods can be practically used in combination.",
"Compared to RoBERTa (fully-sup) (Liu et al., 2019), the performance gap has been further narrowed to 2.89 points on average.",
"9 Compared to DeBERTa (fully-sup), there is still a sizeable gap between few-shot and fully-supervised systems.",
"We list the best-performing combination for each task in Table 5. The best combinations are very different across tasks, and there is no single method that dominates most tasks.",
"PET and ADAPET as well as iPET and Noisy Student are about equally preferred while cross-split labeling and no regularization term perform better.",
"We thus recommend future work to focus on the development of methods that achieve consistent and robust performance across tasks.",
"We summarize the following findings: Finding 3. Gains of different methods are largely complementary.",
"A combination of methods largely outperforms individual methods, performing close to a strong fully-supervised baseline on RoBERTa.",
"However, there is still a sizeable gap between the best few-shot and the fully-supervised system.",
"Finding 4. No single few-shot method dominates most NLU tasks.",
"This highlights the need for the development of few-shot methods with more consistent and robust performance across tasks.",
"We open-source FewNLU, an integrated toolkit designed for few-shot NLU.",
"It contains implemen-9 Note that the gap could be larger since RoBERTa-Large has a smaller number of parameters than DeBERTa, and RoBERTa (fully-sup) does not incorporate additional beneficial techniques such as ensembling or self-training.",
"tations of state-of-the-art methods, data processing utilities, a standardized few-shot training framework, and most importantly, our proposed evaluation framework.",
"Figure 2 shows the architecture.",
"We hope FewNLU could facilitate benchmarking few-shot learning methods for NLU tasks and ex-pendit the research in this field.",
"We introduce an evaluation framework, re-evaluate a number of few-shot learning methods under the evaluation framework with a novel Multi-Splits strategy, and release a few-shot toolkit.",
"Apart from this, we also aim at advancing the development of few-shot learning by sharing several new experimental findings.",
"We identify several new directions for future work: (1) In practice, how to define the hyper-parameter search space a priori is a challenge.",
"(2) It is critical for the community to iterate and converge on a common evaluation framework.",
"(3) Few-shot natural language generation might also be studied in a similar framework.",
"We thank Dani Yogatama for valuable feedback on a draft of this paper.",
"Tang is funded by NSFC for Distinguished Young Scholar (61825602).",
"Zheng, Ding, Tang, and Yang are funded by the National Key R&D Program of China (2020AAA0105200) and supported by Beijing Academy of Artificial Intelligence (BAAI).",
"Zheng is Funded by China Postdoctoral Science Foundation (2021M690471).",
"Zhou and Li are supported in part by the National Natural Science Foundation of China Grant 62161146004, Turing AI Institute of Nanjing and Xi'an Institute for Interdisciplinary Information Core Technology."
] | [
"abstain",
"abstain",
"result",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"In the field of dialogue summarization, due to the lack of training data, it is often difficult for supervised summary generation methods to learn vital information from dialogue context.",
"Several works on unsupervised summarization for document by leveraging semantic information solely or auto-encoder strategy (i.e., sentence compression), they however cannot be adapted to the dialogue scene due to the limited words in utterances and huge gap between the dialogue and its summary.",
"In this study, we propose a novel unsupervised strategy to address this challenge, which roots from the hypothetical foundation that a superior summary approximates a replacement of the original dialogue, and they are roughly equivalent for auxiliary (self-supervised) tasks, e.g., dialogue generation.",
"The proposed strategy RepSum is applied to generate both extractive and abstractive summary with the guidance of the followed n th utterance generation and classification tasks.",
"Extensive experiments on various datasets demonstrate the superiority of the proposed model compared with other unsupervised methods.",
"Dialogue summarization distills key information from a dialogue context and synopsizes it into a concise summary.",
"As a novel topic of critical importance, it offers powerful potentials for a number of scenarios, e.g, the court debate in civil trial, the customer service calls arisen from agent(s) and customer, the business meeting engaged with multi-members.",
"It also assists users in quick access and consumes the essential content in the dialogue.",
"Major attempts on dialogue summarization are template-based (Wang and Cardie, 2013; Oya et al., 2014) in the primitive stage by extracting key information and filling it into the learned templates.",
"However, these template-based techniques limit the Figure 1: A summary is generated from the input dialogue firstly, and then the original dialogue and its corresponding summary are exploited for n th utterance prediction, respectively.",
"scope of their applications and cannot be adapted to a wider range of conversational data since their input structure is predefined and the learned templates are domain-specific.",
"Later, various works explore the assistance from labeled auxiliary information for summary generation, by leveraging either dialogue act (Goo and Chen, 2018), or key point sequence (Liu et al., 2019).",
"The former predicts the dialogue act label of each utterance as explicit interactive signals, while the latter attempts to learn the logic of the summary via key point sequence.",
"Recently, Ganesh and Dingliwal (2019) converts the dialogue into a document by aptly capturing discourse relations which proves to be effective under the scenario of document summarization.",
"While prior deep content generation methods rely on large amounts of annotated data, they are rarely available for dialogue summarization due to the prohibitive costs of labeled data.",
"A straightforward way to alleviate the dependency of the annotated data is to apply the existing unsupervised methods designed for document summarization (Rossiello et al., 2017; Zheng and Lapata, 2019; Baziotis et al., 2019; Chu and Liu, 2019) to the dialogue scene.",
"However, we argue that these methods accompany weakness either in extractive or in abstractive dialogue summarization.",
"In terms of extractive methods, they mainly rely on semantic information without any supervision signals.",
"As a result, they are ragged in effects due to the limited words in dialogue utterances.",
"As for abstractive approaches, they are commonly designed with an auto-encoder (AE) where the latent variable decodes to a summary which attempts to reconstruct the original input representation.",
"Hence, they are constrained to the small gap between the input text and the target summary (e.g., sentence compression) while failing to reconstruct long input text (e.g., dialogue).",
"In this paper, we propose an innovative unsupervised strategy, dubbed RepSum, which can be applied to both extractive and abstractive summarization.",
"The key intuition is derived from the evaluation methods of extrinsic summarization (Mani, 2001), which testifies the impact of summarization based on how it affects the completion of some other tasks, such as information retrieval, relevance assessment, reading comprehension, etc.",
"We claim that a superior summary can offer a semantic replacement of the original dialogue, which provides equivalent information for completing auxiliary tasks , e.g., dialogue generation, as shown in figure",
"1. Specifically, we propose two auxiliary tasks which are n th utterance generation and n th utterance selection from K candidates based on the previous contents.",
"Both the dialogue and the summary aim to achieve decent performances on the specific task respectively.",
"Besides, we introduce KL divergence to curtail the difference between results based on the dialogue and the summary.",
"This strategy provides the summarization with essential self-supervised signals via auxiliary tasks.",
"Furthermore, it decouples the training from the reconstruction of AE, which enables to support longer text or dialogue to be effectively summarized.",
"Our main contributions are as follows: We propose RepSum, an unsupervised (or self-supervised) strategy for dialogue summarization, which roots from the hypothesis that a superior summary approximates a replacement of the original dialogue for completing other tasks.",
"It leverages several intrinsic self-supervised signals.",
"Based on the RepSum strategy, we propose the corresponding model and employ it to both extractive and abstractive summarization.",
"The extensive experiments with multiple dialogue datasets demonstrate the superiority of the proposed model over several unsupervised approaches.",
"Dialogue Summarization extracts significant information from dialogues.",
"Most of the initial works adopted extractive-based methods.",
"For instance, Bui et al. (2009) produced multiple short fragments from utterances and then selected the parse of the summary by SVM combined with semantic-similarity features.",
"Later, (Oya et al., 2014; Wang and Cardie, 2013) induced abstractive generation templates for constructing candidate summary sentences.",
"Moreover, to benefit from the existing technologies for document summarization, Ganesh and Dingliwal (2019) converted the conversation into a text document through discourse relations and lexical information and then created summaries via pointer-generator (See et al., 2017).",
"However, given that dialogues are different from documents in terms of interactive patterns, most researchers explored to summarize the dialogue by leveraging auxiliary information hidden in the utterances.",
"For example, Goo and Chen (2018) proposed to utilize dialogue act as an auxiliary supervised signal and design a sentence-gated mechanism for modeling the relationships between dialogue acts and the summary.",
"In addition, Liu et al. (2019) predicted the keypoint sequence first and then use it to guide the summary prediction.",
"In contrast to the supervision works, we focus on the unsupervised dialogue summarization considering the high cost and limitation of the labeled data in the dialogue scene.",
"Additionally, our proposed strategy is applicable to both extractive and abstractive models without using any outer information (e.g., template, dialogue acts, and keypoint) but leveraging its intrinsic self-supervised nature.",
"Unsupervised Summarization Historically, unsupervised summarization focused on extracting utterances directly.",
"For example, LEAD chooses the first several utterances and TextRank (Mihal-cea and Tarau, 2004) ranks utterances by running a graph-based algorithm, where each node represents an utterance and the weight between any two nodes is calculated by the semantic similarity.",
"Later, Rossiello et al. (2017) proposed a centroid-based method for text summarization that exploits the compositional capabilities of word embeddings.",
"Zheng and Lapata (2019) improved it by building graphs with directed edges considering the relative positions of any two sentences which contributes to their respective centrality.",
"In recent works, the task of unsupervised summarization is framed as a self-supervised auto-encoder problem, namely sentence compression.",
"Miao and Blunsom (2016); Baziotis et al. (2019); Chu and Liu (2019) applied the auto-encoder framework, where the expected abstract is set to the latent variables from which the input sentence is reconstructed.",
"Fevry and Phang (2018) added noise to extend sentences and trained a denoising auto-encoder to recover the input text.",
"Brazinskas et al. (2020) introduced a hierarchical variational auto-encoder to associate the individual reviews with stochastic latent codes for opinion summarization.",
"Recently, another line of works focused on edit-distance-based approaches.",
"West et al. (2019) summarized by applying the Information Bottleneck principle to the objective of conditional language modeling.",
"In addition, Zhou and Rush (2019); Schumann et al. (2020) summarized by hill climbing with word-level extraction, which searches the text for a high-scoring summary by discrete optimization.",
"Compared to these works, to the best of our knowledge, our model is one of the pioneers attempting unsupervised dialogue summarization.",
"To improve the effectiveness, we devise a generalized strategy RepSum that incentivizes the summary to complete the auxiliary tasks as the original dialogue does, thus providing self-training signals and in turn enabling long texts to be summarized.",
"RepSum roots from the hypothetical foundation that a superior summary approximates a replace-Figure",
"replace-Figure 2: The overall flow chart of the proposed model.",
"The middle square is the unsupervised dialogue summarization generation process.",
"Further, both the dialogue and the corresponding summary are employed on auxiliary tasks (i.e., n th utterance generation and classifi-cation).",
"The innovation lies to a superior summary is the replacement of the original dialogue.",
"ment of the original dialogue, and they are roughly equivalent for completing auxiliary (self-supervised) tasks.",
"Figure 2 shows the flow chart of the introduced replacement strategy.",
"Specifically, the summary generation module aims at generating a summary from the original dialogue.",
"During this generation process, two auxiliary tasks, n th utterance generation and n th utterance classification, are constructed to transform unsupervised dialogue summarization task into self-supervised mode by learning through auxiliary tasks.",
"Furthermore, we apply RepSum to extractive and abstractive summarization, experiments verify its effectiveness in an empirical point of view.",
"As introduced above, we leverage two auxiliary tasks to act as self-supervised signals to assist the generation process of a superior summary.",
"Given that the summary is the replacement of the original dialogue, the input dialogue and the generated summary are expected to achieve similar results on these tasks respectively.",
"Hence, we add the KullbackLeibler (KL) divergence to curtail the differences between the results of each auxiliary task based on the input dialogue and the generated summary.",
"The details are denoted as follows: Task1: Generation (TG) aims at generating the n th utterance.",
"We employ the commonly used encoder-decoder structure.",
"The whole dialogue is concatenated and encoded (as a document) by the bi-directional LSTM(Hochreiter and Schmidhuber, 1997) for the sake of fair comparison with other baselines.",
"The representation of each word is the concatenation of the forward and backward LSTM states, i.e., h i = [ h fwdi , h bwdi ] .",
"As for the decoder, we employ a uni-directional LSTM with attention mechanism (Luong et al., 2015).",
"Concretely, the attention distribution a t and the following context vector c t are formulated as: a ti = ( h i W a s t ) , c t = n (cid:88) i =1 a i h i (1) where W a is the learnable parameter and is the softmax function.",
"The context vector and the current decoder state s t are employed for predicting the probability distribution of the output word over all the vocabulary words: p ( y t ) = ( W p ( ( W k [ y t 1 ; s t ; c t ] + b k )) + b p ) (2) where W p , W k , b p and b k are learnable parameters.",
"is the softmax function and is the tanh function.",
"We choose the negative logliklihood as the loss function, and the loss of the utterance generation based on the dialogue via the path of enc dia dec dia (see Figure 2) is denoted as: LTG dia = q (cid:88) t =1 log p ( l t | l <t ; enc dia ) (3) where l = { l 1 , l 2 , ..., l q } is the generated utterance.",
"Similarly, the utterance generation based on the generated summary LTG sum is calculated via the process of enc sum dec sum in Figure",
"2. To guarantee the similar performance of the results based on the original dialogue and the generated summary, we also add KL divergence to curtail the difference between the probability distribution of prediction at each timestep: LTG kl = q (cid:88) t =1 KL ( p ( l t | l <t ; enc dia ) || p ( l t | l <t ; enc sum )) (4) Hence, the loss for the n th utterance generation task is denoted as: LTG = 0 LTG dia + 1 LTG sum + 2 LTG kl (5) where 0 , 1 , and 2 are the weight for each loss.",
"3.3 Unsupervised Summarization The RepSum is employed to both the extractive and abstractive summarization: Extractive Summarization We consider the extractive summarization as a sentence binary classification task as (Nallapati et al., 2017) does, which means R utterances in a dialogue with label one are extracted to be an extractive summary.",
"Specifically, we use enc ext ( enc dia in the Figure 2) applied by the Bi-LSTM to encode utterances in dialogue, and they are represented as hidden states h 1 , h 2 , ..., h n 1 .",
"Then, the representation of the dialogue is the average pooling of the concatenated hidden states of the entire utterances, denoted as: d = ( W d 1 n 1 n 1 (cid:88) i =1 [ h fwdi ; h bwdi ] + b d ) (9) where W d and b d are learnable parameters, and is the tanh function.",
"Task2: Classification (TC) is designed to select the correct n th utterance from the K candidate utterances.",
"Similar to the dialogue encoding in the task TG, we choose the Bi-LSTM as the encoder.",
"The dialogue representation h d is the average of the hidden state of each word.",
"Besides, each candidate is also encoded by the Bi-LSTM and projected to a dense vector by logit layer f , and then concatenated to h d , formulated as [ f ( uc i ); h d ] .",
"The probability of each utterance belonging to the correct answer is calculated by a logistic layer.",
"Furthermore, we use cross-entropy for training via the process of enc dia classifier dia (see Figure 2).",
"The loss based on the dialogue is formulated as: LTC dia = K (cid:88) n =1 z n log z n (6) Similarily, LTC sum based on the generated summary through enc sum classifier sum is calculated.",
"We also use the KL divergence to measure the difference between the results from the dialogue and generated summary: LTC kl = KL ( p ( uc dia ) || p ( uc sum )) (7) where p ( uc dia ) and p ( uc sum ) is the probability distribution on K candidates.",
"For utterances classification, each utterance is concatenated with the dialogue representation d .",
"And a logistic layer predicts the probability belonging to the generated summary, as shown below: p ( u i = 1) = ( W h h i + h i W hd d + b h ) (10) where W h , W hd and b h are learnable parameters, and is the sigmoid function.",
"Later, we choose the top probability R utterances as the extractive summary.",
"After obtaining the initial generated summary, the unsupervised extractive summarization can be guided under the RepSum strategy.",
"Specifi-cally, the extractive-based summary is optimized by the auxiliary tasks for the sake of effective results and similar performance of the dialogue.",
"Hence, the training loss for extractive summarization including n th utterance generation and classification is denoted as: L ext = L extTG + L extTC (11) Abstractive Summarization The abstractive summarization process follows the conventional encoder-decoder structure.",
"For each time step, the word prediction probability is calculated via Eq.",
"2. To generate the abstractive summary used for the auxiliary tasks, we sample each word from the probability (cid:101) y t softmax ( p ( y t )) and encode them as enc a sum ( enc sum in the Figure 2).",
"However, it is a non-differentiable process, which can not be trained directly.",
"Hence, we use the Straight-Through (ST) Gumble Estimator introduced in (Bengio et al., 2013) to solve this problem.",
"During the forward training pass and test process, we use the reparametriza-tion trick as a variance approximation of sampling from the original probability (Maddison et al., 2014).",
"Specifically, sampling word is transformed to take the argmax from a new probability, (cid:101) y is discretized using argmax and sampling as: (cid:101) y t = argmax ( log ( p ( y t )) + g ) , g = log ( log ( )) , U (0 , 1) (12) where g is the Gumble distribution and U is the uniform distribution.",
"As for computing the gradient in the backward pass, we use a continuous and differentiable approximation to argmax : p ( y it ) = exp (( log ( p ( y it )) + g i ) / ) (cid:80) | V | j =1 exp (( log ( p ( y jt )) + g j ) / ) (13) AMI Justice Test Set Size 132 1525 Avg.",
"where | V | is the vocabulary size and the (0 , ) is the temperature parameter.",
"Samples from Gumble Softmax distributions are identical to samples from a categorical distribution as 0 .",
"The input for the encoder enc a sum is denoted as: e absy t = | V | (cid:88) i =1 e ( w i ) p ( y it ) (14) where e ( w i ) is the word embedding of the words.",
"After the acquisition of the abstractive summary, we also employ the RepSum strategy for training.",
"Due to the difficulty of the generation, we supply two more other auxiliary losses.",
"Firstly, the experiments indicate that the model is difficult to converge due to the lack of any guidance for the decoder (see w/o fake-sum in Table 5), we employ the extractive summary as a fake summary for teacher forcing training.",
"Hence, the fake summary generation loss L fs is calculated following the Eq.",
"1, Eq.",
"2 and Eq.",
"3. Moreover, given that abstractive summary is limited to readability and fluency, we pre-train a language model with dialogue utterances to solve this problem.",
"We aim to generate fluent summaries by adding language modeling loss, which approaches the output prediction to language output: L lm = KL ( p ( y t ) || p lm ( y t )) (15) Hence, the training loss for the unsupervised abstractive dialogue summarization is denoted as: L abs = L abs TG + L abs TC + 6 L fs + 7 L lm (16) Parameters 6 and 7 are normalization weight.",
"We evaluate RepSum on a meeting dataset in English AMI and a multi-party court debate dataset in Chinese Justice .",
"The statistics are presented in details (see Tabel 1).",
"The AMI 1 meeting corpus (Carletta et al., 2005) consists of 100 hours of meeting recordings 1 http://groups.inf.ed.ac.uk/ami/corpus/overview.shtml Type Model AMI Justice R-1 R-2 R-L R-1 R-2 R-L Extractive ORACLE 24.57 4.44 15.03 37.28 21.05 32.78 LEAD3 9.15 1.78 5.36 17.69 3.33 11.52 TextRank (Mihalcea and Tarau, 2004) 11.27 0.84 7.19 20.72 6.51 13.56 Centroid (Rossiello et al., 2017) 14.08 2.09 8.19 22.31 6.53 13.66 PacSum(Zheng and Lapata, 2019) 16.15 2.23 9.14 23.36 7.03 14.66 RepSum-Ext (ours) 18.77 2.24 10.80 25.88 8.21 15.97 Abstractive 2g shuf Fevry and Phang (2018) 14.08 2.09 8.18 20.19 4.15 12.08 MeanSum(Chu and Liu, 2019) 16.09 2.30 11.14 21.25 5.54 13.44 SEQ 3 (Baziotis et al., 2019) 17.06 2.23 11.85 22.47 3.88 14.67 RepSum-Abs (ours) 18.88 2.38 15.62 24.23 6.37 15.14 Table 2: Comparison of our mechanism employed in extractive and abstractive summarization with other baseline models.",
"in English.",
"It includes high-quality and manually produced transcription, dialogue acts, topic segmentation, extractive and abstractive summaries, etc.",
"In this work, we use the recording transcripts as the original input and the provided abstractive summary as the expected summary to be generated.",
"Justice.",
"The court debate records consist of 30,000 dispute cases.",
"In the court trial scenario, there are multiple roles (i.e., judge, plaintiff, defendant).",
"In the whole debate dialogue, the plaintiff and the defendant debate on controversy focus leading by the judge.",
"After the trial, the judge summarizes the facts recognized through the trial.",
"Thus we use the court debate transcript as the original input and the fact description as the expected summary.",
"In our experiments 2 , we optimize the proposed model using Adam Optimizer (Kingma and Ba, 2014) with the learning rate of 3e-4.",
"We train on a single TeslaP100 GPU with a batch size of 16.",
"The vocabulary size is 30,000 and embedding dimension for each word is 200.",
"The hidden size is 200 for both encoder and decoder.",
"For gumble softmax, we set the temperature to 0.5.",
"In the auxiliary task C 2 , we denote k as 4, which means we select the other 3 similar utterances.",
"They are chosen from all the utterances in the dataset randomly.",
"For extractive summarization, we pick out the top 3 utterances by their probability.",
"We set the 0 to 7 equals 0.5, 0.5, 5, 1, 1, 2, 1, 0.006 respectively to balance the scale of each module.",
"We firstly report the performance of the ORACLE as an upper bound, which uses a greedy algo-2",
"rithm to extract several utterances to maximize the ROUGE compared with the ground truth.",
"LEAD3 extracts the first three utterances as the summary.",
"As for the extractive-based methods, we compare with classical TextRank (Mihalcea and Tarau, 2004) which converts the dialogue to a weighted-graph where each node represents an utterance and the edge weight expresses the semantic similarity between any two utterances.",
"Centroid (Rossiello et al., 2017) proposes a centroid-based method for text summarization that exploits the compositional capabilities of word embeddings.",
"PacSum (Zheng and Lapata, 2019) improves the TextRank by building graphs with directed edges considering the relative positions of any two sentences contributing to their respective centrality.",
"With regard to the abstractive-based methods, we compare with several auto-encoder based approaches.",
"2g shuf (Fevry and Phang, 2018) adds noise to extend sentences and trains a denoising auto-encoder to recover the original input text.",
"SEQ 3 (Baziotis et al., 2019) constructs a compressor to generate summary and a reconstructor to regenerate input sentence via two chained encoder-decoder pairs.",
"MeanSum (Chu and Liu, 2019) employs the mean of the representations of the input to decode a reasonable summary.",
"Table 2 shows the experimental results based on the AMI and the Justice datasets.",
"ROUGE 3 score (Lin, 2004) is used for evaluation.",
"For extractive summarization, we found the upper bound ORACLE is quite low in dialogue summarization (see the first row in Table 2) compared 3 https://github.com/pltrdy/files2rouge Model AMI Justice Relevance Fluency Relevance Fluency Avg Avg Avg Avg TextRank 0.57 0.51 1.55 0.81 0.69 0.68 1.34 0.76 Centroid 0.88 0.83 1.64 0.80 1.15 0.71 1.42 0.81 PacSum 1.02 0.77 1.67 0.76 1.13 0.66 1.51 0.79 RepSum-Ext 1.17 0.79 1.69 0.81 1.21 0.63 1.54 0.76 2g shuf 0.56 0.78 0.78 0.76 0.71 0.63 0.81 0.81 MeanSum 0.89 0.84 0.89 0.68 0.83 0.61 1.02 0.67 SEQ 3 1.11 0.81 1.03 0.69 1.09 0.59 1.18 0.72 RepSum-Abs 1.23 0.82 1.22 0.72 1.17 0.68 1.20 0.69 Table 3: Human evaluation.",
"with the document summarization where R-1 score usually approaches to 50 as reported in (Liu and Lapata, 2019).",
"It indicates that the dialogue summarization is much more challenging.",
"Additionally AMI dataset is more appropriate for abstractive summarization since its ORACLE scores are much lower than those for Justice dataset.",
"The score of LEAD3 estimates the information distribution over dialogues.",
"Furthermore, our proposed RepSum-Ext is compared with other four state-of-the-art models with significant improvement in Rouge score.",
"Table 2 demonstrates that the RepSum strategy is effective for extractive summarization.",
"For abstractive summarization, we mainly compare RepSum-Abs with AE-based methods.",
"We employ the same encoder and decoder settings for baselines for a fair comparison.",
"In terms of ROUGE value, our model outperforms all the baselines, especially in R-L score.",
"We consider that the auxiliary tasks training mechanism helps to prevent the focus on single-word reconstruction, but aims to remain significant continuous information.",
"In order to ensure the rationality/correctness of the generated summary, we also conducted a human evaluation.",
"The annotators are required to estimate the quality of the generated summaries with respect to the relevance indicating the connection between the dialogue and the summary and fluency representing the readability.",
"The scores are divided into three levels: +2, +1, 0, in which a higher score stands for excellent.",
"We report the average score and coefficient which indicates the consistency of evaluation by different annotators.",
"Specifically, we choose 100 examples for each dataset and six annotators are required to evaluate all the tested methods.",
"The annotators are experienced graduate students who have taken the annotation training before the experiment.",
"Results shown in Table 3 indicate that our proposed strategy is superior to Type Task AMI Justice R-1 R-2 R-L R-1 R-2 R-L Ext.",
"all the baselines.",
"Furthermore, compared to the abstractive-based methods, extractive-based methods perform better on fluency.",
"We consider that the difference is due to sentence integrity.",
"To evaluate the effectiveness of the proposed RepSum strategy, we conduct two ablation studies.",
"We first measure the influence of each auxiliary task (see Table 4).",
"Further, we verify the contribution of each module, shown in Table 5.",
"Table 4 indicates that combining the two auxiliary tasks achieves the best performance on both extractive and abstractive methods.",
"The decline of performance is observed once we remove either task, especially the generation task.",
"We assume that the classification task is considerably straightforward, which may not require affluent semantic information.",
"However, it serves as an auxiliary section with complicated generation tasks.",
"Furthermore, we remove each component to investigate the module effectiveness in RepSum-Abs.",
"The result is shown in Table 5.",
"It indicates that all the components make a positive contribution.",
"To be specific, fake summary (-w/o fake-sum) is the critical point, which contributes to the model convergence.",
"Besides, if we remove tasks based on the generated summary (-w/o sum-task), the performance declines significantly.",
"It proves the assump-tion that a superior summary is supposed to conduct the auxiliary tasks as original dialogue does.",
"Either removing tasks based on the dialogue (-w/o dia-Fake summary R-1 R-2 R-L random 15.45 2.39 10.07 extractive-based 18.88 2.38 15.62 Table 6: Effectiveness of potential fake summary choices for abstractive summarization on the AMI. T AMI Justice R-1 R-2 R-L R-1 R-2 R-L 3 10.38 0.88 7.13 15.71 3.03 9.92 4 19.87 2.20 11.05 22.80 6.33 14.24 5 18.63 1.85 10.73 22.15 5.52 13.63 6 18.65 1.94 10.61 22.67 6.06 13.98 7 18.77 1.84 10.72 22.51 5.87 13.87 Table 7: Effectiveness of candidate numbers in the auxiliary task classification. It is based on the extractive summarization of the AMI and Justice dataset. task) or adding KL divergence (-w/o kl) to control similar effectiveness between dialogue and generated summary, tends to harm the performance.",
"Moreover, we notice that the pre-trained language model (-w/o lm) benefits the bi-gram by noticing the significant decrease in R-2.",
"The extractive-based method is ignored since its components are the same as the abstractive-based approach.",
"Fake Summary Extensive experiments show that abstractive summarization is difficult to converge without word-level guidance.",
"Hence, we propose to construct a fake summary to solve this problem.",
"In this section, we conduct two experiments for different fake summary construction.",
"We first attempt to select T utterances randomly.",
"Further, we choose an extractive summary.",
"Table 6 shows that the random selection result is inferior to extractive summary guidance.",
"Given the consideration of high accuracy, we choose the extractive summary as guidance in this work.",
"However, we assume that random selection can be also employed for efficiency consideration if necessary.",
"Candidates number in TC To further explore the effectiveness of the auxiliary task classification (TC) for unsupervised dialogue summarization, we conduct experiments by varying the candidate's number K .",
"Such number influences the performance of the extractive summarization on both AMI and Justice datasets.",
"We set the number varying from 3 to 7.",
"The performance of our model with the variation of the number K is shown in Table 7.",
"It indicates that the R-1 approaches a stable value with slight fluctuation when we increase the K continuously.",
"Besides, there exists a drastic increase Figure 3: Effectiveness of n th utterance selection in the auxiliary task generation.",
"in R-1 when K is augmented from 2 to",
"3. Hence, given the trade-off between the efficiency and the generation quality, we choose 4 as the number of candidates for all the experiments.",
"Utterance choice in TG The selection of n th utterance for generation in the dialogue is crucial for the model effectiveness.",
"Meaningless utterances such as hmmm, the meeting is over in meeting, and please sign the transcript after checking in court debates may be useless.",
"At the same time, none of the contextual information is integrant.",
"Hence, we conduct experiments to testify the effectiveness of three different utterance selection strategies: Random selects the n th utterance randomly.",
"The utterances before n th are regarded as the input.",
"If the remained dialogue utterances are less than 5, the example is discarded.",
"Last chooses the last utterance of each dialogue for prediction.",
"Moreover, Sec splits the dialogue into several sections and then picks the last utterance of each section.",
"Sec is segmented based on the rule which requires each section to contain at least 8 utterances with at least 5 words and 3 significant utterances whose tf-idf value is superior to the threshold.",
"Figure 3 shows the result conducted on justice dataset 4 .",
"It proves that meaningful utterance benefits the performance.",
"Specifically, Last leads to the worst result on both R-1 and R-L due to the universal utterance at the end of a dialogue.",
"We consider that Random prevents semantic information deficiency through selecting crucial utterances occasionally compared with Sec which achieves the best performance.",
"4 The performance on AMI dataset shows a similar pattern.",
"We only show the visualized result on the justice dataset due to the paper length limitation.",
"This work investigates the problem of unsupervised dialogue summarization.",
"we propose a novel unsupervised strategy RepSum, which roots from the hypothetical foundation that a superior summary approximates a replacement of the original dialogue, and they are roughly equivalent for completing auxiliary tasks.",
"RepSum is employed on both extractive and abstractive-based models via a self-supervision from two auxiliary tasks.",
"Comprehensive experiments on various datasets show the effectiveness of the proposed mechanism compared to the other unsupervised baselines.",
"We sincerely thank Wei Liu, Yu Duan, and Jie Zhou for the helpful discussions.",
"This research was supported by the National Key Research And Development Program of China (2018YFC0830200; 2018YFC0830206; 2020YFC0832505) References Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, and Alexandros Potamianos."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task.",
"Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires information from local context.",
"In this work, we revisit LM-based constituency parsing from a phrase-centered perspective.",
"Inspired by the natural reading process of human readers, we propose to regularize the parser with phrases extracted by an unsupervised phrase tagger to help the LM model quickly manage low-level structures.",
"For a better understanding of high-level structures, we propose a phrase-guided masking strategy for LM to emphasize more on reconstructing non-phrase words.",
"We show that the initial phrase regularization serves as an effective bootstrap, and phrase-guided masking improves the iden-tification of high-level structures.",
"Experiments on the public benchmark with two different backbone models demonstrate the effectiveness and generality of our method.",
"The hierarchical structure of natural language plays a key role in accurate language understanding, but can be unfortunately overlooked when text is treated as a plain sequence.",
"To this end, considerable efforts have been made in integrating structural inductive bias into neural language models (LM) (Shen et al., 2018b; Wang et al., 2019; Shen et al., 2020).",
"Despite different implementations, the general idea is to first apply a parsing module to induce the soft grammar tree of the input text, and then incorporate the induced tree into an encoding model ( e.g. , Transformer (Vaswani et al., 2017)).",
"The model is optimized in an unsupervised manner with masked language modeling (MLM) (Devlin et al., 2019) as a common proxy task.",
"These models have shown inspiring success in inducing meaningful parsing trees without human annotation, but still face two challenging problems.",
"Firstly, the parsing module is randomly initialized at the beginning of the training process.",
"Suboptimal initial parsing accuracy can lead to problematic structural constraints in the encoder model, and further influence the training process and final performance (Gimpel and Smith, 2012).",
"Secondly, the token-level language modeling task encourages the model to focus on local structures, since the reconstruction of a masked word mainly relies on its local context.",
"As a result, the learned model achieves high accuracy in local constituents, like noun phrases (NP), but significantly worse accuracy in high-level, long-distance structures, such as subordinate clauses (SBAR) and prepositional phrases (PP).",
"On the PTB dataset, the most recent structured language model (Shen et al., 2020) still falls behind neural probabilistic context-free grammar models ( e.g. , Kim et al. (2019b)) by over 4% in average SBAR and PP recall.",
"In this work, we revisit the LM-based unsupervised parsing models by providing a phrase-centered perspective .",
"We model the reading process of a sentence in a stylized pipeline: when we try to parse a sentence, instead of handling each individual word , we first recognize the obvious phrases , for instance, names, concepts, slogans, etc.",
"Some phrases are known beforehand, while some are learned from the current context.",
"We then treat each phrase as a complete unit, and only need to fig-ure out the high-level structures that connect these phrases.",
"Following this intuition, we mimic the reading process with a three-stage learning framework.",
"In the first stage, we identify the multigram phrases with the help of an unsupervised phrase tagging model.",
"The extracted phrase set guides the parsing module to quickly manage the pattern of short constituents at the early training stage.",
"The warm-up process does not require any external 6406 Figure 1: Illustration of LM-based unsupervised constituency parsing.",
"resource, and effectively improves and stabilizes the initial parsing accuracy.",
"In the second stage, the model is optimized through the original MLM task.",
"After this stage, the model is good at capturing local structures, as stated above.",
"In the third stage, to push the model out of its comfort zone and force it to learn about high-level structures, we apply a simple and effective phrase-guided masked language modeling task.",
"Specifically, we extract short phrases in the training sentences as the local constituents identified by the model, which are relatively easy cases for the model.",
"We then sample a part of the phrases, and exclude them from the MLM task, so we are basically downsampling intra-phrase words in the reconstruction task, and emphasizing non-phrase words that connect phrases.",
"The proposed method is general and can be applied to arbitrary LM-based parsers in a plug-and-play manner.",
"Contributions.",
"The major contributions of this paper are summarized as follows: (1) We point out the major challenges faced by LM-based unsupervised constituency parsing, and revisit the problem with a phrase-centered perspective; (2) We propose a novel framework with phrase-regularized warmup and phrase-guided mask language modeling, that can be applied to general LM-based parsers for improvement; (3) Experiments on the public benchmark with two different base models demonstrate the effectiveness of our method.",
"Code and data will be published for further research study.",
"In this section, we present our problem formulation and briefly review the general framework of LM-based unsupervised constituency parsing, as illustrated in Figure 1.",
"Parsing as Distance Estimation.",
"Constituency parsing aims to assign an undirected constituency tree to the input sentence, which illustrates how different parts are hierarchically combined in the sentence (Jurafsky, 2000).",
"To enable end-to-end model learning, following prior works (Wang et al., 2019; Shen et al., 2020), the discrete parsing tree is represented as a distance sequence d ( s ) = { d 1 , d 2 , ..., d n 1 } , where d i is the distance score between adjacent words w i and w i +1 , parameterized by model .",
"Given the distance sequence, the tree structure can be induced in a greedy manner: starting from each single token as a leaf constituent, we recursively merge two constituents with the minimum distance score into a large constituent.",
"The tree structure is hence uniquely determined by the relative order of the distance sequence.",
"Figure 1 shows a concrete example of the parse tree induction process from an estimated distance sequence.",
"Our goal is to learn a high-quality distance estimator d from unlabeled text corpus that induces accurate parsing trees.",
"Distance-guided Model Learning.",
"For model learning, the generated distance sequence is injected into an encoding model ( e.g. , Transformer) as structural bias to control information exchange between words.",
"Intuitively, two adjacent words with smaller distance score are more likely to belong to the same constituent, and will exchange more information to each other.",
"The distance estimator d is jointly optimized with the distance-guided encoder from the masked language modeling (MLM) task as a proxy.",
"Formally, given a masking rate and a sentence s = { w 1 , w 2 , ..., w n } , a mask sequence is sampled from uniform Bernoulli sampling, where m i is a binary variable with p ( m i = 1) = .",
"We then get the masked sentence s = { w 1 , ..., w n } by replacing w i with a mask token where m i = 1 .",
"The MLM loss is computed as: (cid:96) mlm ( s ) = (cid:80) w i X mask log p ( w i | s ) | X mask | , where X mask is the set of masked tokens.",
"The encoding model is trained to minimize (cid:96) mlm based on the distance-constrained information aggregation.",
"We will introduce more details about the distance-aware encoders in Section",
"4. 3 Framework Overview In this work, we recognize and examine two major challenges of LM-based grammar induction: 6407 TrainingCorpus Unsupervised Phrase Mining Stage 1: Phrase-regularized Warm-up Initial Phrase Set the longest river the biggest city the fastest car guides Stage 2: Standard Masked LM Training Stage 3: Phrase-guided Masked LM Training Local constituents the longest river in the world What [M] the [M] river in [M] world Distance Estimator What is the longest river in the world Original Sequence Masked Sequence Distance d 6 d 6 d 1 d 1 d 2 d 2 d 3 d 3 d 4 d 4 d 5 d 5 d 7 d 7 Distance-guided Encoder Output TrainingLoss What [M] the [M] river in [M] world Distance Estimator What is the longest river in the world Distance-guided Encoder the world in longest river the is What Currently Induced Parse Tree [C1] [C2] [C3] [C4] [C5] [C6] [C7] What [M] the longest river in [M] world Distance Estimator What is the longest river in the world Distance-guided Encoder sample Sample for Masking MLM Loss Phrase Loss Graph Legends d 6 d 6 d 1 d 1 d 2 d 2 d 3 d 3 d 4 d 4 d 5 d 5 d 7 d 7 d 6 d 6 d 1 d 1 d 2 d 2 d 3 d 3 d 4 d 4 d 5 d 5 d 7 d 7 Figure 2: An overview of the proposed framework.",
"(1) the randomly initialized distance estimator can yield a suboptimal information exchange network in the encoder in the cold start phase, which may further lead to suboptimal parsing accuracy due to error accumulation.",
"(2) the token reconstruction task mainly relies on the aggregation of local information, thus can hardly guide the model to manage high-level structures across long distances.",
"To tackle the challenges, we revisit LM-based unsupervised constituency parsing from a phrase-centered perspective .",
"We propose a three-stage training framework, as shown in Figure 2.",
"In the first stage, we extract an initial phrase set using an off-the-shelf unsupervised phrase tagger.",
"The extracted phrases serve as effective guidance to help warm up the distance estimator to boost its initial accuracy in the cold start phase.",
"The model then gradually gets rid of the help from the initial phrase set and learns about local structures from the original MLM task in the second stage.",
"In the third stage, we try to push the model out of its comfort zone by moving the focus from local structures to high-level structures.",
"We extract a new phrase set from the local constituents identified by the model itself, which consists of easy cases for the model.",
"We then downsample the intra-phrase words for the reconstruction task, and emphasize more on the relatively harder reconstruction of non-phrase words, which connect local constituents into high-level structures.",
"In following sections, we first introduce the base encoding models we experiment with, and then present more details of the proposed framework.",
"Our method can be applied to any encoder with a distance estimator and distance-constrained information aggregation.",
"In this work, we examine our method on two recently developed models, TreeTransformer (Wang et al., 2019) and StructFormer (Shen et al., 2020), as our base models.",
"Both models extend the original Transformer encoder (Vaswani et al., 2017) by adding a structure-aware attention term.",
"Specifically, the original Transformer computes the attention matrix A as A = softmax ( QK (cid:62) d head ) , where a ij A is the attention score between word w i and word w j , Q is the query matrix, K is the key matrix, and d head is the attention head size.",
"The extended attention score in a structure-constrained encoder is written as a (cid:48) ij = q ij a ij , where q ij is the structure-based attention score determined by the distance sequence.",
"The two base encoders differ in their ways to parameterize the distance function d and to define the structure-based attention score q ij .",
"structure-based attention score q ij represents the probability that two words belong to the same constituent, and is defined as",
"Intuitively, words within a closer distance have more information exchange in TreeTransformer.",
"Structformer parameterizes the distance sequence with a Convolutional Neural Network.",
"StructFormer uses a more complicated structure constraint: each constituent has a head word, and information can only be exchanged between the head word and remaining child words in the constituent .",
"The structure-based attention score q ij stands for the probability that w i and w j can exchange information, which means w i is the head word of any constituent containing w j , or vice versa.",
"q ij is jointly determined by the distance sequence and a syntacic height sequence.",
"Ideally, the height of each child word in a constituent should not exceed the boundary distances.",
"More details can be found in the original paper (Shen et al., 2020).",
"To summarize, the distance estimator d determines the attention matrix in the encoder.",
"Through the MLM task, the model learns to optimize d for more effective information aggregation.",
"We then induce the parse tree from the distance sequence generated by d in the parsing process.",
"In following sections, we introduce details about the proposed phrase-regularized warm-up and phrase-guided masked language modeling, which jointly help train a better d .",
"Given a target sentence, we first extract spans that are likely to be phrases.",
"By definition, we seek word sequences that consistently occur consecu-tively in the text, forming a complete semantic unit in certain contexts (Finch, 2016).",
"The extracted phrases are used as additional guidance for the distance estimator at the very beginning of the training process.",
"Specifically, we encourage the distance estimator to assign smaller intra-phrase distances than phrase boundary distances to draw a clear gap on the phrase boundaries.",
"Figure 3 shows a concrete example of intra-phrase and phrase boundary distances.",
"Here we introduce more details about the unsupervised phrase extraction process and phrase regularization for warm-up.",
"Phrase Extraction.",
"Without introducing any exogenous resource, we apply the core phrase mining module of the UCPhrase model (Gu et al., 2021), which does not require any complicated model training.",
"Specifically, within each document D , its core phrase PD is defined as the set of max frequent n-grams in D .",
"For each phrase w i : j = { w i , ..., w j } PD , frequent means it has to occur in the document for at least times.",
"max means there does not exist any super phrase w (cid:48) w i : j in the same document.",
"Such document-level max frequent n-grams are shown to have reasonably high quality and preserve contextual completeness.",
"Uninformative sequences are filtered by a corpus-oriented stopword list generated by TF-IDF ranking.",
"The extracted phrase set serves as effective regularization for the randomly initialized parsing model in early training steps.",
"Note that the phrase extraction module can be replaced by any phrase tagger.",
"Here we show that even phrases extracted by this simple heuristic tagger can bring clear improvement.",
"Phrase Regularization.",
"Given the target sentence s = { w 1 , w 2 , ..., w n } and its initial phrase set P s , we encourage the parser to generate smaller distance scores between intra-phrase words than the distance scores on the phrase boundaries.",
"Formally, we compute the phrase distance loss for each phrase w i : j = { w i , ..., w j } P s as the average margin loss between intra-phrase distance scores and phrase boundary distance scores: (cid:96) phrase ( w i : j ) = 1 | w i : j | j 1 (cid:88) k = i max(0 , d k d i 1 ) + max(0 , d k d j ) 2 .",
"The phrase distance loss for the entire sentence is",
"For StructFormer, we replace the intra-phrase distances into the intra-phrase heights to satisfy its structure constraint as introduced in Section",
"4. The overall loss function at training step t is formed as: (cid:96) ( s ) = (cid:96) mlm ( s ) + t (cid:96) phrase ( s ) , which is basically the original masked language modeling loss (cid:96) mlm regularized by the phrase distance loss (cid:96) phrase with coefficient t .",
"For smooth transition, we apply a step-wise linear coefficient decay.",
"At training step t , we have t = 0 (1 t/T 1 ) , so that we apply full regularization at the very beginning, and then gradually remove the regularization until the model learns completely from the MLM task.",
"In experiments, we set T 1 to the number of steps in one training epoch by default.",
"The masked language modeling task mainly relies on the aggregation of local context information around the masked word.",
"For instance, in the example sentence presented in Figure 2, the prediction of longest mainly depends on its neighbor river .",
"Hence, the parser can quickly manage the structure of short phrases as they are closely related to the optimization proxy.",
"High-level long constituents, however, can hardly be captured in this process.",
"From this perspective, the sentence parsing task can then be divided into two parts: parsing the structures of short phrases, and capturing high-level long structures that connect short phrases.",
"The former can be learned from the intra-phrase word reconstruction task, and the latter depends on the modeling of other non-phrase words.",
"Following this intuition, we propose simple and effective phrase-guided masked language modeling to emphasize the reconstruction of words outside of local constituents.",
"Specifically, we parse the training sentences with the learned model, and treat all local constituents ( e.g. , with fewer than 4 tokens) from the generated parsing trees.",
"Given a sentence with tagged local phrases, we first apply uniform Bernoulli sampling on the phrases with probability p .",
"The sampled phrases are excluded from the MLM task: words inside of the sampled phrases will not be masked.",
"All rest words are sampled for masking with the original masking rate .",
"Formally, given a sentence s with the tagged phrase set P s , the probability of word w i being masked in the MLM task is computed as: P ( m i = 1) = (cid:26) (1 p ) , w i P s , otherwise.",
"By doing so, we try to push the model out of its comfort zone of local structure learning, and encourage it to focus more on how the local constituents are connected.",
"Discussion.",
"Another natural idea to achieve similar intuition is to apply phrase-level reconstruction through whole-phrase masking.",
"Namely, we mask the entire phrase so that the model cannot make prediction merely based on information aggregated through local structures, but can only rely on cross-phrase structures to gather information.",
"We test this intuition in two ways: (1) replace each token in the phrase with a mask token, and apply standard MLM; (2) replace the entire phrase with one mask token, and apply autoregressive phrase reconstruction with a decoder similar to Raffel et al. (2020).",
"Interestingly, results from both implementations show that whole-phrase masking can hurt the accuracy of unsupervised parsing.",
"A possible reason is that reconstructing the entire masked phrase relies on deep semantic knowledge rather than just syntactic structures.",
"We list this finding here and leave it as a potential research problem.",
"Dataset and Evaluation.",
"Following prior studies (Shen et al., 2018b; Wang et al., 2019; Shen et al., 2020), we train all models on the plain text of the PTB corpus (Mikolov et al., 2010) and evaluate them on the WSJ test set (Taylor et al., 2003), in which punctuations are removed.",
"We follow the standard evaluation for unsupervised parsing: given a predicted parsing tree, we fetch all of its subtrees (nested constituents), and compare with those from the gold tree to compute the F1 score.",
"We also report recall scores of the typed constituents in gold trees, including noun (NP), verb (VP), prepositional (PP), adjective (ADJ), adverb (ADV) phrases and subordinate clauses (SBAR).",
"The precision score for each type is not available in the unsupervised setting since the predicted constituents do not have types.",
"Compared Models.",
"Our baseline methods include three major types of unsupervised parsing method.",
"PRPN (Shen et al., 2018a), ON-LSTM (Shen et al., 2018b) and URNNG (Kim et al., 2019c) are recurrent neural network based methods.",
"They are trained by recurrent language modeling loss, where the model is asked to predict the next token given the previous context.",
"C-PCFG (Kim et al., 2019b) and Neural L-PCFGs (Zhu et al., 2020) are neural network augmented methods based on the traditional probabilistic context-free grammar framework, where a set of weighted linguistic rules are learned for tree generation.",
"TreeTransformer (Wang et al., 2019) and StructFormer (Shen et al., 2020) are the backbone models we apply in our study, as introduced in Section",
"4. For our method, we report performances of three variants based on each base model: the performance with phrase-regularized warm-up ( +PRW ), the performance with the phrase-guided masked language modeling ( +PMLM ), and the performance with both ( +PRW+PMLM ).",
"Reproduction Details.",
"We use the published StructFormer and TreeTransformer implementations with their default hyperparameters and optimizers as our backbone models.",
"The learning rate is controlled with a linear scheduler for both models, which starts from the original learning rate, and applies a linear learning rate decay until it reaches 0.0 at the last training step.",
"The initial coefficient 0 for PRW is set to 0.02 for both models.",
"The phrase masking rate p for PMLM is set to 0 .",
"9 .",
"The total number of training steps is fixed, and PMLM is included after 80% of training steps.",
"Training and evaluation are conducted on NVIDIA RTX A6000 GPUs.",
"We report average results from four random seeds ( 1 , 11 , 111 , 1111 ).",
"Results from both backbone models are reproduced in the same machine as variants with our methods for fair comparison.",
"Results from other baseline models are taken from Shen et al. (2020).",
"Table 1 shows average F1 scores for the compared methods on the WSJ test set.",
"Both PRW and PMLM bring improvements in the F1 score.",
"Specifically, PRW increases the F1 score by +1 .",
"1% and +1 .",
"3% on TreeTransformer and StructFormer respectively; PMLM increases the F1 score by +0 .",
"8% and +0 .",
"1% respectively; When applied together, PRW and PMLM bring improvement on F1 score by 1 .",
"4% and 1 .",
"7% respectively.",
"Compared with other parsing models, the enhanced models have very competitive performances.",
"The proposed method helps StructFormer achieve at least comparable F1 score with the state-of-the-art model based on neural linguistic rule learning (C-PCFG).",
"Table 2 provides a more in-depth view of the performance change of each type of constituents.",
"Consistent with our intuition, PRW improves the recall of local constituents like NP, and PMLM improves the recall of compositional constituents like VP, SBA and PP.",
"To our surprise, PRW also brings strong improvement in PP, which means the better accuracy in local structure parsing may have a positive impact on high-level structures as well.",
"StructFormer achieves state-of-the-art PP recall with the help of PRW and PMLM.",
"PRM brings strong performance gain, and we are curious about whether the strength of such enhancement, if any, starts from the initial training steps as our design, and how the strength changes with different masking rates.",
"Intuitively, a larger masking rate may make the initial parsing task even harder, since there is less information available.",
"Figure 4 shows the F1 curves of the base TreeTransformer model and the enhanced variant with PRW under different masking rates.",
"We observe that, PRW always brings significant improvement in the initial parsing performance.",
"Different masking rates do not bring very clear differences in the initial performance of the base model.",
"However, the strength of enhancement from PRW becomes more significant as the masking rate gets higher, which verifies our intuition, that the guidance from the initial phrase set may be more valuable with less information available to the initial parser.",
"To better understand the effectiveness of PRW and PMLM, we conduct case study of the generated parsing trees, as shown in Figure",
"5. Consider the subtree in the green square.",
"The real noun phrase in the ground truth is takeover candidates , while 6412 StructFormer mistakenly merges spotting and takeover first.",
"The model with PRW identifies the correct noun phrase.",
"The improved initialization with phrase regularization does enhance the parser in its ability to identify short phrases.",
"The subtree in the blue square shows an example of high-level constituent structure, where takeovers aren't totally gone forms a clause together with that .",
"StructFormer merges that with takeovers and breaks the clause.",
"The original MLM task mainly focuses on local structures, and may prioritize potential local constituents ( that takeovers can form a noun phrase from a local view).",
"PRW cannot fix this issue, but PMLM helps make the right decision.",
"This verifies our intuition, that PMLM encourages the model to learn about the structure of non-phrase words, and to capture better high-level structures.",
"Limitations.",
"Note that in Figure 5, all models cannot resolve the structure ambiguity between Mario Gabelli an expert and an expert at ... .",
"It indicates that the current unsupervised methods may have little understanding of semantic and commonsense knowledge.",
"Both structures make sense to the model.",
"Weakly-supervised, or knowledge-enhanced learning may alleviate the problem.",
"The study of unsupervised constituency parsing can be traced back to 50 years ago (Booth, 1969; Salo-maa, 1969).",
"We highlight some recent progresses that are closely related to our work: 1) Adding syntactic inductive bias into modern neural network models.",
"ON-LSTM (Shen et al., 2018b) allows hidden neurons to learn long-term or short-term information by a novel gating mechanism and activation function.",
"In URNNG (Kim et al., 2019c), amortized variational inference was applied between a recurrent neural network grammar (RNNG) (Dyer et al., 2016) decoder and a tree structure inference network, which encourages the decoder to generate reasonable tree structures.",
"TreeTransformer (Wang et al., 2019) adds extra locality constraints to the Transformer encoder's self-attention to encourage the attention heads to follow a tree structure such that each token can only attend on nearby neighbors in lower layers and gradually extend the attention field to further tokens when climbing to higher layers.",
"StructFormer (Shen et al., 2020) propose a joint dependency and constituency parser, then uses the dependency adjacency matrix to constraint the self-attention heads in transformer models.",
"2) Using neural network to parameterize linguistic models.",
"The compound PCFG (Kim et al., 2019b) achieves grammar induction by maximizing the marginal likelihood of the sentences which are generated by a probabilistic context-free grammar (PCFG).",
"Neural L-PCFG (Zhu et al., 2020) demonstrated that PCFG can benefit from modeling lexical dependencies.",
"NBL-PCFG (Yang et al., 2021) took a step further by directly modeling bilexical dependencies and reducing both learning and representation complexities of LPCFGs.",
"DIORA (Droz-dov et al., 2019) proposed using inside-outside dynamic programming to compose latent representations from all possible binary trees.",
"The representations of inside and outside passes from the same sentences are optimized to be close to each other.",
"3) Extracting syntactic structure from pretrained language models.",
"Kim et al. (2019a) extract trees from pretrained transformers.",
"Using the model's representations for each word in the sentence, they score fenceposts (positions between words) by computing distance between the two adjacent words.",
"They parse by recursively splitting the tree at the fencepost with the largest distance.",
"4) Leveraging statistic features to identify constituents.",
"Cao et al. (2020) use constituency tests, that specify a set of transformations and use an unsupervised neural acceptability model to make grammaticality decisions.",
"Clark (2001) proposed to identify constituents based on their span statistics, e.g. mutual information between left and right contexts of the span.",
"In this work, we study the role of phrases in language model-based unsupervised constituency parsing.",
"We propose a phrase-centered framework with novel phrase-regularized warm-up and phrase-aware masked language modeling.",
"Experiments with two different base models demonstrate the effectiveness of the proposed methods.",
"Comprehensive case study is conducted for straightforward understanding of the advantages of our model.",
"Although this work mainly focuses on the task of unsupervised parsing, the presented idea and observation can be valuable in more general context.",
"We plan to follow this line of work and further incorporate our method in long-range structured language model learning in the future.",
"Research was supported in part by US DARPA KAIROS Program No.",
"FA8750-19-2-1004, So-cialSim Program No.",
"W911NF-17-C-0099, and INCAS Program No.",
"HR001121C0165, National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532, IBM-Illinois Discovery Accelerator Institute, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897.",
"Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government.",
"The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Understanding discourse structures of news articles is vital to effectively contextualize the occurrence of a news event.",
"To enable computational modeling of news structures, we apply an existing theory of functional discourse structure for news articles that revolves around the main event and create a human-annotated corpus of 802 documents spanning over four domains and three media sources.",
"Next, we propose several document-level neural-network models to automatically construct news content structures.",
"Finally, we demonstrate that incorporating system predicted news structures yields new state-of-the-art performance for event coreference resolution.",
"The news documents we annotated are openly available and the annotations are publicly released for future research 1 .",
"Detecting and incorporating discourse structures is important for achieving text-level language understanding.",
"Several well-studied discourse analysis tasks, such as RST (Mann and Thompson, 1988) and PDTB style (Prasad et al., 2008) discourse parsing and text segmentation (Hearst, 1994), generate rhetorical and content structures that have been shown useful for many NLP applications.",
"But these widely applicable discourse structures overlook genre specialties.",
"In this paper, we focus on studying content structures specific to news articles , a broadly studied text genre for many NLP tasks and applications.",
"We believe that genre-specific discourse structures can effectively complement genre independent discourse structures and are essential for achieving deep story-level text understanding.",
"What is in a news article?",
"Normally, we expect a news article to describe well verified facts of newly 1 Dataset can be found at https://github.com/ prafulla77/Discourse_Profiling happened events, aka the main events.",
"However, almost no news article limits itself to reporting only the main events.",
"Most news articles also report context-informing contents, including recent precursor events and current general circumstances, that are meant to directly explain the cause or the context of main events.",
"In addition, they often contain sentences providing further supportive information that is arguably less relevant to main events, comprising of unverifiable or hypothetical anecdotal facts, opinionated statements, future projections and historical backgrounds.",
"Apparently, the relevance order of sentences is not always aligned with their textual order, considering that sentences in a news article are ordered based on their vague importance that is generally determined by multiple factors, including content relevance as well as other factors such as the focus of an article, the author's preferences and writing strategies.",
"While a number of theoretical studies for news discourse exist, little prior effort has been put on computational modeling and automatic construction of news content structures.",
"We introduce a new task and a new annotated text corpus for profiling news discourse structure that categorizes contents of news articles around the main event.",
"The NewsDiscourse corpus consists of 802 news articles (containing 18,155 sentences), sampled from three news sources ( NYT , Xinhua and Reuters ), and covering four domains ( business , crime , disaster and politics ).",
"In this corpus, we label each sentence with one of eight content types reflecting common discourse roles of a sentence in telling a news story, following the news content schemata proposed by Van Dijk (Teun A, 1986; Van Dijk, 1988a,b) with several minor modifications.",
"Next, we present several baselines for automatically identifying the content type of sentences.",
"The experimental results show that a decent performance can be obtained using a basic neural network-based multi-way classification approach.",
"The sentence classification performance can be further improved by modeling interactions between sentences in a document and identifying sentence types in reference to the main event of a document.",
"We envision that the news discourse profiling dataset as well as the learnt computational systems are useful to many discourse level NLP tasks and applications.",
"As an example, we analyze correlations between content structures and event coreference structures in news articles, and conduct experiments to incorporate system predicted sentence content types into an event coreference resolution system.",
"Specifically, we analyze the lifespan and spread of event coreference chains over different content types, and design constraints to capture several prominent observations for event coreference resolution.",
"Experimental results show that news discourse profiling enables consistent performance gains across all the evaluation metrics on two benchmark datasets, improving the previous best performance for the challenging task of event coreference resolution.",
"Several well-studied discourse analysis tasks have been shown useful for many NLP applications.",
"The RST (Mann and Thompson, 1988; Soricut and Marcu, 2003; Feng and Hirst, 2012; Ji and Eisenstein, 2014; Li et al., 2014a; Liu et al., 2019) and PDTB style (Prasad et al., 2008; Pitler and Nenkova, 2009; Lin et al., 2014; Rutherford and Xue, 2016; Qin et al., 2016; Xu et al., 2018) discourse parsing tasks identify discourse units that are logically connected with a predefined set of rhetorical relations, and have been shown useful for a range of NLP applications such as text quality assessment (Lin et al., 2011), sentiment analysis (Bhatia et al., 2015), text summarization (Louis et al., 2010), machine translation (Li et al., 2014b) and text categorization (Ji and Smith, 2017).",
"Text segmentation (Hearst, 1994; Choi, 2000; Eisenstein and Barzilay, 2008; Koshorek et al., 2018) is another well studied discourse analysis task that aims to divide a text into a sequence of topically coherent segments and has been shown useful for text summarization (Barzilay and Lee, 2004), sentiment analysis (Sauper et al., 2010) and dialogue systems (Shi et al., 2019).",
"The news discourse profiling task is complementary to the well-established discourse analysis tasks and is likely to further benefit many NLP applications.",
"First, it studies genre-specific discourse structures, while the aforementioned discourse analysis tasks study genre independent general discourse structures and thus fail to incorporate domain knowledge.",
"Second, it focuses on understanding global content organization structures with the main event at the center, while the existing tasks focus on either understanding rhetorical aspects of discourse structures (RST and PDTB discourse parsing) or detecting shallow topic transition structures (text segmentation).",
"Genre-specific functional structures have been studied based on different attributes, but mostly for genres other than news articles.",
"Liddy (1991), Kircz (1991) and Teufel et al. (1999) used rhetorical status and argumentation type to both define functional theories and create corpora for scientific articles.",
"Mizuta et al. (2006), Wilbur et al. (2006), Waard et al. (2009) and Liakata et al. (2012) extensively studied functional structures in biological domain with multiple new annotation schemata.",
"Past studies on functional structures of news articles have been mainly theoretical.",
"Apart from Van Dijk's theory of news discourse (Teun A, 1986; Van Dijk, 1988b), Pan and Kosicki (1993) proposed framing-based approach along four structural dimensions: syntactic, script, thematic and rhetorical, of which syntactic structure is similar to the Dijk's theory.",
"Owing to the high specificity of the Dijk's theory, Yarlott et al. (2018) performed a pilot study for its computational feasibility and annotated a small dataset of 50 documents taken from the ACE Phase 2 corpus (Doddington et al., 2004).",
"However, as mentioned in the paper, their annotators were given minimal training prior to annotations, consequently, the kappa inter-agreement (55%) between two annotators was not satisfactory.",
"In addition, coverage of their annotated dataset on broad event domains and media sources was unclear.",
"The only studies on functional structure of news article with sizable dataset include Baiamonte et al. (2016) that coarsely separates narration from descriptive contents and Friedrich and Palmer (2014) that classify clauses based on their aspectual property.",
"We consider sentences to be units of discourse-and define eight schematic categories to study their roles within the context of the underlying topic.",
"The original Van Dijk's theory was designed for Main Content Fine-grained type (1) U.S. President Donald Trump tried on Tuesday to calm a storm over his failure to hold Russian President Vladimir Putin accountable for meddling in the 2016 U.S. election, saying he misspoke in a joint news conference in Helsinki.",
"analyzing discourse functions of individual paragraphs w.r.t the main event, and the pilot study done by Yarlott et al. (2018) also considered paragraphs as units of annotations.",
"Observing that some paragraphs contain more than one type of contents, we decided to conduct sentence-level annotations instead to minimize disagreements between annotators.",
"and allow consistent annotations 2 .",
"Table 1 contains an example for each content type.",
"Consistent with the theory presented by Van Dijk, the categories are theoretical and some of them may not occur in every news article.",
"Main content describes what the text is about, the most relevant information of the news article.",
"It describes the most prominent event and its consequences that render the highest level topic of the news report.",
"Main Event (M1) introduces the most important event and relates to the major subjects in a news report.",
"It follows strict constraints of being the most recent and relevant event, and directly monitors the processing of remaining document.",
"Categories of all other sentences in the document are interpreted with respect to the main event.",
"Consequence (M2) informs about the events that are triggered by the main news event.",
"They are either temporally overlapped with the main event or happens immediately after the main event.",
"2 Our two annotators agreed that the majority of sentences describe one type of content.",
"For a small number of sentences that contain a mixture of contents, we ask our annotators to assign the label that reflects the main discourse role of a sentence in the bigger context.",
"Context-informing sentences provide information related to the actual situation in which main event occurred.",
"It includes the previous events and other contextual facts that directly explain the circumstances that led to the main event.",
"Previous Event (C1) describes the real events that preceded the main event and now act as possible causes or preconditions for the main event.",
"They are restricted to events that have occurred very recently, within last few weeks.",
"Current Context (C2) covers all the information that provides context for the main event.",
"They are mainly used to activate the situation model of current events and states that help to understand the main event in the current social or political construct.",
"They have temporal co-occurrence with the main event or describe the ongoing situation.",
"Finally, sentences containing the least relevant information, comprising of unverifiable or hypothetical facts, opinionated statements, future projections and historical backgrounds, are classified as distantly-related content.",
"Historical Event (D1) temporally precedes the main event in months or years.",
"It constitutes the past events that may have led to the current situation, or indirectly relates to the main event or subjects of the news article.",
"Anecdotal Event (D2) includes events with specific participants that are difficult to verify.",
"It may include fictional situations or personal account of incidents of an unknown person especially aimed to exaggerate the situation.",
"Evaluation (D3) introduces reactions from immediate participants, experts or known personalities that are opinionated and may also include explicit opinions of the author or those of the news source.",
"They are often meant to describe the social or political implications of the main event or evaluation of the current situation.",
"Typically, it uses statements from influential people to selectively emphasize on their viewpoints.",
"Expectation (D4) speculates on the possible consequences of the main or contextual events.",
"They are essentially opinions, but with far stronger implications where the author tries to evaluate the current situation by projecting possible future events.",
"In parallel with discourse profiling annotations, we also identify sentences that contain direct quotes or paraphrased comments stated directly by a human and label them as Speech.",
"We assign a binary label, Speech vs. Not Speech, to each sentence independently from the annotations of the above eight schematic discourse roles.",
"Note that Speech sentences may perfectly be annotated with any of the eight news discourse roles based on their contents, although we expect Speech sentences to serve certain discourse roles more often, such as evaluation and expectation.",
"The Van Dijk's theory was originally based on case studies of specific news reports.",
"To accommodate wider settings covering different news domains and sources, we made several minor modifications to the original theory.",
"First, we label both comments made by external sources (labeled as verbal reac-tions in the original theory) and comments made by journalistic entities as speech, and label speech with content types as well.",
"Second, we added a new category, anecdotal event (D2), to distinguish unverifiable anecdotal facts from other contents.",
"Anecdotal facts are quite prevalent in the print media.",
"Third, we do not distinguish news lead sentences that summarize the main story from other Main Event (M1) sentences, considering that lead sentences pertain to the main event and major subjects of a news.",
"The NewsDiscourse corpus consists of 802 openly accessible news articles containing 18,155 sentences 3 annotated with one of the eight content 3 Note that only sentences within the body of the news article are considered for annotation and headlines are considered",
"types or N/A (sentences that do not contribute to the discourse structure such as photo captions, text links for images, etc.) as well as Speech labels.The documents span across the domains of business, crime, disaster and politics from three major news sources that report global news and are widely used: NYT (USA), Reuters (Europe) and Xinhua (China).",
"We include 300 articles each (75 per domain) from Reuters and Xinhua that are collected by crawling the web and cover news events between 2018-19.",
"NYT documents are taken from existing corpora, including 102 documents from KBP 2015 4 (Ellis et al., 2015) and 100 documents (25 per domain) from the annotated NYT corpus (Evan, 2008).",
"We trained two annotators for multiple iterations before we started the official annotations.",
"In the beginning, each annotator completed 100 common documents (Eight from each of the domains and sources and four from the KBP) within the corpus to measure annotator's agreement.",
"The two annotators achieved Cohen's score (Cohen, 1968) of 0.69144,0.72389 and 0.87525 for the eight fine-grained, three coarse-grained and Speech label annotations respectively.",
"Then, the remaining documents from each domain and news source were split evenly between the two annotators.",
"Detailed distributions of the created corpus, including distributions of different content types across domains and media sources are reported in Tables 2 and 3 respectively.",
"We find that distributions of content types vary depending on either domains or media sources.",
"For instance, disaster documents report more consequences (M2) and anecdotal events (D2), crime documents contain more previous events (C1) and historical events (D1), while politics documents have the most opinionated contents (sentences in categories D3 and D4) immediately followed by business documents.",
"Furthermore, among different sources, NYT articles are the most opinionated and describe historical events most often, followed by Reuters.",
"In contrast, Xinhua articles has relatively more sentences describing the main event.",
"Speech labels and content type labels are separately annotated and each sentence has both a content type label and a speech label (binary, speech as independent content.",
"We used NLTK (Bird et al., 2009) to identify sentence boundaries in the body text.",
"Occasionally, one sentence is wrongly split into multiple sentences, the annotators were instructed to assign them with the same label.",
"4 KBP documents are not filtered for different domains due to the small size of corpus.",
"vs. not speech).",
"In the created corpus, 5535 out of 18,155 sentences are labeled as speech.",
"A wide range of computational models has been applied for extracting different forms of discourse structures.",
"However, across several tasks, neural network methods (Ji and Eisenstein, 2015; Becker et al., 2017) are found the most effective, with relatively superior performance obtained by modeling discourse-level context (Dai and Huang, 2018a,b).",
"As an initial attempt, we use a hierarchical neural network to derive sentence representations and a document encoding, and model associations between each sentence and the main topic of the document when determining content types for sentences.",
"Shown in Figure 1, it first uses a word-level bi-LSTM layer (Hochreiter and Schmidhuber, 1997) with soft-attention over word representations to generate intermediate sentence representations which are further enriched with the context information using another sentence-level bi-LSTM.",
"Enriched sentence representations are then averaged with their soft-attention weights to generate document encoding.",
"The final prediction layers model associations between the document encoding and each sentence encoding to predict sentence types.",
"Context-aware sentence encoding: Let a document be a sequence of sentences { s 1 , s 2",
"..s n } , which in turn are sequences of words { ( w 11 , w 12 .. )",
"..",
"( w n 1 , w n 2 , .. ) } .",
"We first transform a sequence of words in each sentence to contextualized word representations using ELMo (Peters et al., 2018) followed by a word-level biLSTM layer to obtain their hidden state representations H s .",
"Then, we take weighted sums of hidden representations using soft-attention scores to obtain intermediate sen-Figure 1: Neural-Network Architecture Incorporating Document Encoding for Content Type Classification tence encodings ( S i ) that are uninformed of the contextual information.",
"Therefore, we apply another sentence-level biLSTM over the sequence of sentence encodings to model interactions among sentences and smoothen context flow from the headline until the last sentence in a document.",
"The hidden states ( H t ) of the sentence-level bi-LSTM are used as sentence encodings.",
"Document Encoding: We generate a reference document encoding, as a weighted sum over sentence encodings using their soft-attention weights.",
"Sentence types are interpreted with respect to the main event.",
"However, while the sentence-level biLSTM augments sentence representations with the local context, they may be still unaware of the main topic.",
"Therefore, we compute element-wise products and differences between the document encoding and a sentence encoding to measure their correlations, and further concatenate the products and differ-Models M1 M2 C1 C2 D1 D2 D3 D4 Macro Micro F1 P R F1 F1 Feature-based (SVM) 34.0 8.0 18.0 44.0 45.0 14.0 52.0 44.0 39.1 37.9 38.3 45.7 Basic Classifier 42.5 24.7 18.2 55.4 59.6 28.5 66.1 52.5 52.6 47.9 48.8( 0.8) 57.5( 0.6) Document LSTM 49.3 27.3 20.2 57.0 63.6 45.8 67.4 55.6 56.6 52.6 53.2( 0.7) 60.2( 1.0) +Headline 49.8 30.0 21.8 56.7 63.2 42.7 66.8 58.7 57.3 52.9 53.8( 0.7) 60.4( 1.0) + Document encoding 49.6 27.9 22.5 58.1 64.1 48.1 67.4 57.6 56.9 53.7 54.4( 0.8) 60.9( 0.7) CRF Fine-grained 47.7 26.4 22.2 56.0 63.3 45.2 66.4 55.2 55.4 52.9 52.9( 1.4) 59.4( 1.1) CRF Coarse-grained 48.4 29.3 21.6 55.9 62.9 47.2 66.7 54.2 55.6 53.4 53.5( 0.9) 59.6( 0.7) Table 4: Performance of different systems on fine-grained discourse content type classification task.",
"ences with the sentence encoding to obtain the final sentence representation that is used for predicting its sentence type.",
"Predicting Sentence Types: First, we use a two layer feed forward neural network as a regular classifier to make local decisions for each sentence based on the final sentence representations.",
"In addition, news articles are known to follow inverted pyramid (Bell, 1998) or other commonly used styles where the output labels are not independent.",
"Therefore, we also use a linear chain CRF (Lafferty et al., 2001) layer on the output scores of the local classifier to model dependence among discourse labels.",
"We split 802 documents into training/dev/test sets of 502/100/200 documents.",
"The training set includes 50 documents from each domain in Reuters and Xinhua, 9 documents from each domain in NYT and 66 documents from KBP; the dev set includes 8 documents from each domain and source and 4 documents from KBP; and the test set includes 17 documents from each domain in Reuters and Xinhua, 8 documents from each domain in NYT and 32 documents from KBP.",
"The dataset is released with the standard split we used in our experiments.",
"For evaluation, we calculate F1 score for each content type as well as micro and macro F1 scores.",
"Feature-based (SVM) uses linear SVM classifier (Pedregosa et al., 2011) over features used by Yarlott et al. (2018), including bag of words, tf-idf and 100-dimensional paragraph vectors obtained through Doc2Vec (Le and Mikolov, 2014) implementation in Gensim ( Rehurek and Sojka, 2010).",
"Following Yarlott et al. (2018), we set minimum to 0.01, minimum word count to 5 for Doc2Vec model and train it for 50 epochs.",
"All three features are built on the entire training corpus and the value of C in SVM classifier is set to 10.",
"Basic Classifier uses only the word-level bi-LSTM with soft-attention to learn sentence representations followed by the local feed forward neural network classifier to make content type predictions.",
"Document LSTM adds the sentence-level BiLSTM over sentence representations obtained from the word-level BiLSTM to enrich sentence representations with local contextual information.",
"+Document Encoding uses document encoding for modeling associations with the main topic and obtains the final sentence representations as described previously.",
"+Headline replaces document encoding with headline sentence encoding generated from the word-level biLSTM.",
"Headline is known to be a strong predictor for the main event (Choubey et al., 2018).",
"CRF Fine-grained and CRF Coarse-grained adds a CRF layer to make content type predictions for sentences which models dependencies among fine-grained (eight content types) and coarse-grained (main vs. context-informing vs. supportive contents) content types respectively.",
"We set hidden states dimension to 512 for both word-level and sentence-level biLSTMs in all our models.",
"Similarly, we use two-layered feed forward networks with 1024-512-1 units to calculate attention weights for both the BiLSTMs.",
"The final classifier uses two-layer feed forward networks with 3072-1024-9 units for predicting sentence types.",
"All models are trained using Adam (Kingma and Ba, 2014) optimizer with the learning rate of 5e-5.",
"For regularization, we use dropout (Srivas-tava et al., 2014) of 0.5 on the output activations Systems P R F1 Feature-based (SVM) 61.0 71.0 69.0 Basic Classifier 81.6 80.7 81.2( 0.4) Document LSTM 80.7 83.6 82.2( 0.7) Table 5: Performance of different systems on Speech label classification task.",
"of both BiLSTMs and all neural layers.",
"Word em-beddings are kept fixed during the training.",
"All the neural model are trained for 15 epochs and we use the epoch yielding the best validation performance.",
"To alleviate the influence of randomness in neural model training and obtain stable experimental results, we run each neural model ten times with random seeds and report the average performance.",
"Tables 4 and 5 show the results from our experiments for content-type and speech label classification tasks.",
"We see that a simple word-level biLSTM based basic classifier outperforms features-based SVM classifier (Yarlott et al., 2018) by 10.5% and 11.8% on macro and micro F1 scores respectively for content-type classification.",
"Adding a sentence-level BiLSTM helps in modeling contextual continuum and improves performance by additional 4.4% on macro and 2.7% on micro F1 scores.",
"Also, as content types are interpreted with respect to the main event, modeling associations between a sentence representation and the referred main topic representation using headline or document embed-dings improves averaged macro F1 score by 0.6% and 1.2% respectively.",
"Empirically, the model using document embedding performs better than the one with headline embedding by 0.6% implying skewed headlining based on recency which is quite prevalent in news reporting.",
"We further aim to improve the performance by using CRF models to capture interdependencies among different content types, however, CRF models using both fine-grained and coarse-grained label transitions could not exceed a simple classifier model.",
"The inferior performance of CRF models can be explained by variations in news content organization structures (such as inverted pyramid, narrative, etc.), further implying the need to model those variations separately in future work.",
"Similarly, for speech label classification task, word-level biLSTM model achieves 12.2% higher F1 score compared to the feature-based SVM classifier which is further improved by 1.0% with M1 M2 C1 C2 D1 D2 D3 D4 N/A M1 88.0 2.6 9.0 38.2 14.6 0.4 123.2 28.",
"We generated confusion matrix (Table",
"6) for content-type classification based on prediction results of the best performing model Document LSTM + Document Encoding on the dev set.",
"Prediction errors mainly occur between Main Event (M1) and Current Context / Evaluation (C2/D3), between Previous Event (C1) and Current Context (C2), between Evaluation (D3) and Expectation (D4), and between Current Context (C2) and Historical Event / Evaluation (D1/D3).",
"We envision that news discourse profiling can be useful to many discourse level NLP tasks and applications.",
"As an example, we investigate uses of news structures for event coreference resolution by analyzing 102 documents from the KBP 2015 corpus included in our NewsDiscourse Corpus .",
"We analyze the lifespan and spread of event coreference chains over different content types.",
"First, table 7 shows the percentage of events that are singletons out of all the events that appear in sentences of each content type.",
"We can see that in contrast to main event sentences (M1), other types of sentences are more likely to contain singleton events.",
"We further analyze characteristics of non-singleton events, to identify positions of their coreferential mentions and the spread of coreference chains in a document.",
"Motivated by van Dijk's theory, we hypothesize that the main events appear in each type of sentences, but the likelihoods of M1 M2 C1 C2 D1 D2 D3 D4 58% 15% 23% 15% 10% 9% 14% 14% Table 8: Percentages of Sentences of each content type that contain a headline main event.",
"seeing the main events in a sentence may vary depending on the sentence type.",
"We consider events that appear in the news headline to approximate the main events of a news article.",
"As shown in Table 8, around 58% 5 of main event sentences (M1) contain at least one headline event, in addition, context-informing sentences (C1+C2), especially sentences focusing on discussing recent pre-cursor events (C1), are more likely to mention headline events as well.",
"Other than the main events, we observe that many events have all of their coreferential mentions appear within sentences of the same content type.",
"We call such events intra-type events .",
"In other words, an intra-type event chain starts from a sentence of any type will die out within sentences of the same content type.",
"Table 9 shows the percentage of intra-type event chains out of all the event chains that begin in a certain type of sentence.",
"We can see that non-main contents (e.g., content types C2-D3) are more likely to be self-contained from introducing to finishing describing an event.",
"In particular, historical (D1) and anecdotal (D2) contents exhibit an even stronger tendency of having intra-type event repetitions compared to other non-main content types.",
"Incorporating Content Structure for Event Coreference Resolution: We incorporate news functional structures for event coreference resolution by following the above analysis and implementing content structure informed constraints in 5 While all the main event sentences are expected to mention some main event, we use headline events to approximate main events and headline events do not cover all the main events of a news article.",
"As shown in our previous work (Choubey et al., 2018), identifying main events is a challenging task in its own right and main events do not always occur in the headline of a news article.",
"In addition, event annotations in the KBP corpora only consider a limited set of event types, seven types specifically, therefore, if main events do not belong to those seven types, they are not annotated as events, which also contributes to the imperfect percentage of main event sentences containing a headline event.",
"an Integer Linear Programming (ILP) inference system to better identify singleton mentions, main event mentions and intra-type event mentions.",
"We use the Document LSTM+Document encoding classifier to predict sentence content types.",
"In addition, we built a discourse-aware event singleton classifier, that resembles the sentence type classifier, to identify singleton event mentions in a document.",
"Specifically, the singleton classifier combines document and sentence representations provided by the content type classifier with contextualized event word representations obtained from a separate word-level biLSTM layer with 512 hidden units.",
"Then, the singleton classifier applies a two-layer feed forward neural network to identify event singletons, and the feed forward network has 3072-512-2 units.",
"We implement ILP constraints based on system predicted content types of sentences and singleton scores of event mentions.",
"Detailed descriptions of ILP constraints we implemented and their equations are included in the appendix.",
"The ILP formulation has been used in our previous work that yields the previous best system for event coreference resolution (Choubey and Huang, 2018), which aims to capture several specific document level distributional patterns of coreferential event mentions by simply using heuristics.",
"For direct comparisons, we adopt the same experimental settings as in Choubey and Huang (2018), using KBP 2015 documents as the training data and using both KBP 2016 and KBP 2017 corpora for evaluation 6 .",
"We retrained the sentence type classifier using 102 KBP 2015 documents annotated with content types, using 15 documents as the development set and the rest as the training data.",
"We trained the event singleton classifier using the same train/dev split.",
"In addition, we used the same event mentions and pairwise event coreference scores produced by a local pairwise classifier the same as in Choubey and Huang (2018) 7 .",
"content-6 All the KBP corpora include documents from both discussion forum and news articles.",
"But as the goal of this study is to leverage discourse structures specific to news articles for improving event coreference resolution performance, we only evaluate the ILP system using news articles in the KBP corpora.",
"This evaluation setting is consistent with our previous work Choubey and Huang (2018).",
"For direct comparisons, the results reported for all the systems and baselines are based on news articles in the test datasets as well 7 The classifier can be obtained from https://git.",
"structure aware ILP system with a baseline system (the row Local classifier ) that performs greedy merging of event mentions using local classifier predicted pairwise coreference scores as well as two most recent models for event coreference resolution, the heuristics-based ILP system (Choubey and Huang, 2018) and another recent system (Lu and Ng, 2017).",
"We use the same evaluation method as in (Choubey and Huang, 2018) and evaluate event coreference resolution results directly without requiring event mention type match 8 .",
"Table 10 shows experimental results.",
"Event coreference resolution is a challenging task as shown by the small margins of performance gains achieved by recent systems.",
"The ILP model constrained by system predicted content structures (the row +Content Structure ) outperforms the pairwise classifier baseline system as well as the two most recent systems consistently across all the evaluation metrics over the two benchmark datasets.",
"In particular, our ILP system outperforms the previous state-of-the-art, the heuristics-based ILP system Choubey and Huang, with average F1 gains of 0.67% and 1.32% on KBP 2016 and KBP 2017 corpora respectively.",
"The superior performance shows that systematically identified content structures are more effective than heuristics in guiding event linking, and establishes the usefulness of the new discourse profiling task.",
"To further evaluate the importance of ILP constraints on Singletons, Main events and Intra-type events, we perform ablation experiments by removing each constraint from the full ILP model.",
"Based on the results in Table 10, all the three types of constraints have noticeable impacts to coreference performance, and singletons and main events constraints contribute the most.",
"8 The official KBP 2017 event coreference resolution scorer considers two event mentions coreferent if they strictly match on their event type and subtype, which requires building a high-performing event type identification system to enable an event coreference resolver to score well.",
"Intuitively, news content structures can help in identifying other event relations as well, such as temporal and causal relations, and thus disentangling complete event structures.",
"For instance, events occurring in C1 (Previous Event) sentences are probable cause for the main event which in turn causes events in M2 (Consequence) sentences (the same rationale can be applied for temporal order).",
"We have created the first broad-coverage corpus of news articles annotated with a theoretically grounded functional discourse structure.",
"Our initial experiments using neural models ascertain the feasibility of this task.",
"We conducted experiments and demonstrated the usefulness of news discourse profiling for event coreference resolution.",
"In the future, we will further improve the performance of news discourse profiling by investigating sub-genres of news articles, and extensively explore its usage for various other NLP tasks and applications.",
"We thank our anonymous reviewers for providing insightful review comments.",
"We gratefully acknowledge support from National Science Foundation via the awards IIS-1942918 and IIS-1755943.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.",
"Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government."
] | [
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Word alignment, which aims to align translationally equivalent words between source and target sentences, plays an important role in many natural language processing tasks.",
"Current unsupervised neural alignment methods focus on inducing alignments from neural machine translation models, which does not leverage the full context in the target sequence.",
"In this paper, we propose MASK-ALIGN , a self-supervised word alignment model that takes advantage of the full context on the target side.",
"Our model parallelly masks out each target token and predicts it conditioned on both source and the remaining target tokens.",
"This two-step process is based on the assumption that the source token contributing most to recovering the masked target token should be aligned.",
"We also introduce an attention variant called leaky attention , which alleviates the problem of high cross-attention weights on specific tokens such as periods.",
"Experiments on four language pairs show that our model outperforms previous unsupervised neural aligners and obtains new state-of-the-art results.",
"1 1 Introduction Word alignment is an important task of finding the correspondence between words in a sentence pair (Brown et al., 1993) and used to be a key component of statistical machine translation (SMT) (Koehn et al., 2003; Dyer et al., 2013).",
"Although word alignment is no longer explicitly modeled in neural machine translation (NMT) (Bahdanau et al., 2015; Vaswani et al., 2017), it is often leveraged to analyze NMT models (Tu et al., 2016; Ding et al., 2017).",
"Word alignment is also used in many other scenarios such as imposing lexical constraints on the decoding process (Arthur et al., 2016; Hasler Corresponding author 1 Code can be found at https://github.com/THUNLP-MT/ Mask-Align.",
"Induced alignment link: Tokio Tokyo",
"et al., 2018), improving automatic post-editing (Pal et al., 2017) , and providing guidance for translators in computer-aided translation (Dagan et al., 1993).",
"Compared with statistical methods, neural methods can learn representations end-to-end from raw data and have been successfully applied to supervised word alignment (Yang et al., 2013; Tamura et al., 2014).",
"For unsupervised word alignment, however, previous neural methods fail to significantly exceed their statistical counterparts such as FAST-ALIGN (Dyer et al., 2013) and GIZA++ (Och and Ney, 2003).",
"Recently, there is a surge of interest in NMT-based alignment methods which take alignments as a by-product of NMT systems (Li et al., 2019; Garg et al., 2019; Zenkel et al., 2019, 2020; Chen et al., 2020).",
"Using attention weights or feature importance measures to induce alignments for to-be-predicted target tokens, these methods outperform unsupervised statistical aligners like GIZA++ on a variety of language pairs.",
"Although NMT-based unsupervised aligners have proven to be effective, they suffer from two major limitations.",
"First, due to the autoregressive w 1 + p 1 w 2 + p 2 w 4 + p 4 EncoderLayer w 3 + p 3 h1 h 2 h 4 h 3 w 1 +p 1 p 1 w 2 +p 2 p 2 w 3 +p 3 p 3 w 4 +p 4 p 4 t 4 t 3 Static-KV Attention Leaky Attention Feed Forward L L (cid:72)(cid:76)(cid:81) (cid:75)(cid:88)(cid:81)(cid:71) (cid:85)(cid:68)(cid:81)(cid:81)(cid:87)(cid:72) (cid:90)(cid:72)(cid:74) (cid:68) (cid:71)(cid:82)(cid:74) (cid:85)(cid:88)(cid:81)(cid:86) (cid:68)(cid:90)(cid:68)(cid:92) (cid:68) t 1 t 2 t 1 t 2 t 3 t 4 h 1 h 2 h 3 h 4 Linear & Softmax (cid:71)(cid:82)(cid:74) (cid:85)(cid:88)(cid:81)(cid:86) (cid:68)(cid:90)(cid:68)(cid:92) Attention Weights Alignment (cid:68) (cid:71)(cid:82)(cid:74) (cid:85)(cid:88)(cid:81)(cid:86) (cid:68)(cid:90)(cid:68)(cid:92) (cid:72)(cid:76)(cid:81) (cid:75)(cid:88)(cid:81)(cid:71) (cid:85)(cid:68)(cid:81)(cid:81)(cid:87)(cid:72) (cid:90)(cid:72)(cid:74) Figure 2: The architecture of MASK-ALIGN .",
"property of NMT systems (Sutskever et al., 2014), they only leverage part of the target context.",
"This inevitably brings noisy alignments when the prediction is ambiguous.",
"Consider the target sentence in Figure 1. When predicting Tokyo, an NMT system may generate 1968 because future context is not observed, leading to a wrong alignment link (1968, Tokyo).",
"Second, they have to incorporate an additional guided alignment loss (Chen et al., 2016) to outperform GIZA++.",
"This loss requires pseudo alignments of the full training data to guide the training of the model.",
"Although these pseudo alignments can be utilized to partially alleviate the problem of ignoring future context, they are computationally expensive to obtain.",
"In this paper, we propose a self-supervised model specifically designed for the word alignment task, namely MASK-ALIGN .",
"Our model parallelly masks out each target token and recovers it conditioned on the source and other target tokens.",
"Figure 1 shows an example where the target token Tokyo is masked out and re-predicted.",
"Intuitively, as all source tokens except Tokio can find their counterparts on the target side, Tokio should be aligned to the masked token.",
"Based on this intuition, we assume that the source token contributing most to recovering a masked target token should be aligned to that target token.",
"Compared with NMT-based methods, MASK-ALIGN is able to take full advantage of bidirectional context on the target side and hopefully achieves higher alignment quality.",
"We also introduce an attention variant called leaky attention to reduce the high attention weights on specific tokens such as periods.",
"By encouraging agreement between two directional models both for training and inference, our method consistently outperforms the state-of-the-art on four language pairs without using guided alignment loss.",
"Figure 2 shows the architecture of our model.",
"The model predicts each target token conditioned on the source and other target tokens and generates alignments from the attention weights between source and target (Section 2.1).",
"Specifically, our approach introduces two attention variants, static-KV attention and leaky attention , to efficiently obtain attention weights for word alignment.",
"To better utilize attention weights from two directions, we encourage agreement between two unidirectional models during both training (Section 2.2) and inference (Section 2.3).",
"Conventional unsupervised neural aligners are based on NMT models (Peter et al., 2017; Garg et al., 2019).",
"Given a source sentence x = x 1 , . . . , x J and a target sentence y = y 1 , . . . , y I , NMT models the probability of the target sentence conditioned on the source sentence: P ( y | x ; ) = I (cid:89) i =1 P ( y i | y <i , x ; ) (1) where y <i is a partial translation.",
"One problem of this type of approaches is that they fail to exploit the future context on the target side, which is probably helpful for word alignment.",
"y i conditioned on the source sentence x and the remaining target tokens y \\ y i :",
"This equals to masking out each y i and then recovering it.",
"We build our model on top of Transformer (Vaswani et al., 2017) which is the state-of-the-art sequence-to-sequence architecture.",
"Next, we will discuss in detail the implementation of our model.",
"As self-attention is fully-connected, directly computing (cid:81) Ii =1 P ( y i | y \\ y i , x ; ) with a vanilla Transformer requires I separate forward passes, in each of which only one target token is masked out and predicted.",
"This is costly and time-consuming.",
"Therefore, how to parallelly mask out and predict all target tokens in a single pass is important.",
"To do so, a major challenge is to avoid the representation of a masked token getting involved in the prediction process of itself.",
"Inspired by Kasai et al. (2020), we modify the self-attention in the Transformer decoder to perform the forward passes concurrently.",
"Given the word embedding w i and position embedding p i for target token y i , we first separate the query inputs q i from key k i and value inputs v i to prevent the to-be-predicted token itself from participating in the prediction: q i = p i WQ (3) k i = ( w i + p i ) WK (4) v i = ( w i + p i ) WV (5) where WQ , WK and WV are parameter matrices.",
"The hidden representation h i for y i is computed by attending to keys and values, K (cid:54) = i and V (cid:54) = i , that correspond to the remaining tokens y \\ y i : h i = Attention( q i , K (cid:54) = i , V (cid:54) = i ) (6) K (cid:54) = i = Concat( { k m | m (cid:54) = i } ) (7) V (cid:54) = i = Concat( { v m | m (cid:54) = i } ) (8) In this way, we ensure that h i is isolated from the word embedding w i in a single decoder layer.",
"However, there exists a problem of information leakage if we update the key and value inputs for each position across decoder layers since they will contain the representation of each position from previous layers.",
"Therefore, we keep the key and value inputs unchanged and only update the query inputs I was born in in 1968 Ich wurde 1968 in Tokio geboren Tokyo .",
"where q li and h li denote the query inputs and hidden states for y i in the l -th layer, respectively.",
"h 0 i is initialized with p i .",
"We name this variant of attention the static-KV attention .",
"By static-KV, we mean the keys and values are unchanged across different layers in our approach.",
"Our model replaces all self-attention in the decoder with static-KV attention.",
"Extracting alignments from vanilla cross-attention often suffers from the high attention weights on some specific source tokens such as periods, [EOS], or other high frequency tokens (see Figure 3).",
"This is similar to the garbage collectors effect (Moore, 2004) in statistical aligners, where a source token is aligned to too many target tokens.",
"Hereinafter, we will refer to these tokens as collectors .",
"As a result of such effect, many target tokens (e.g., the 0.5 0.5 not true falsch 1.0 1.0 not true falsch not true falsch 0.4 0.4 0.2 [NULL] 0.2 0.4 not true falsch 0.8 0.6 [NULL] leaky attention vanilla attention Figure 4: An illustrative example of the attention weights from two directional models using vanilla and leaky attention. Leaky attention provides a leak position [NULL] to collect extra attention weights. two ins in Figure 3) will be incorrectly aligned to the collectors according to the attention weights.",
"This phenomenon has been studied in previous works (Clark et al., 2019; Kobayashi et al., 2020).",
"Kobayashi et al. (2020) show that the norms of the value vectors for the collectors are usually small, making their influence on attention outputs actually limited.",
"We conjecture that this phenomenon is due to the incapability of NMT-based aligners to deal with tokens that have no counterparts on the other side because there is no empty (NULL) token that is widely used in statistical aligners (Brown et al., 1993; Och and Ney, 2003).",
"We propose to explicitly model the NULL token with an attention variant, namely leaky attention .",
"As shown in Figure 4, when calculating cross-attention weights, leaky attention provides an extra leak position in addition to the encoder outputs.",
"Acting as the NULL token, this leak position is expected to address the biased attention weight problem.",
"To be specific, we parameterize the key and value vectors as k NULL and v NULL for the leak position in the cross-attention, and concatenate them with the transformed vectors of the encoder outputs.",
"The attention output z i is computed as follows: z i = Attention( h Li WQ , K , V ) (11) K = Concat( k NULL , H enc WK ) (12) V = Concat( v NULL , H enc WV ) (13) where H enc denotes encoder outputs.",
"2 A similar attention implementation can be found in https://github.com/pytorch/fairseq/blob/master/fairseq/ modules/multihead attention.py.",
"deviation to initialize k NULL and v NULL to ensure that their initial norms are rather small.",
"When extracting alignments, we only consider the attention matrix without the leak position.",
"Note that leaky attention is different from adding a special token in the source sequence, which will share the same high attention weights with the existing collector instead of calibrating it (Vig and Be-linkov, 2019).",
"Our parameterized method is more flexible than Leaky-Softmax (Sabour et al., 2017) which adds an extra dimension with the value of zero to the routing logits.",
"In Section 2.2, we will show that leaky attention is also helpful for applying agreement-based training on two directional models.",
"We remove the cross-attention in all but the last decoder layer.",
"This makes the interaction between the source and target restricted in the last layer.",
"Our experiments demonstrate that this modifica-tion improves alignment results with fewer model parameters.",
"To better utilize the attention weights from two directions, we apply an agreement loss in the training process to improve the symmetry of our model, which has proven effective in statistical alignment models (Liang et al., 2006; Liu et al., 2015).",
"Given a parallel sentence pair (cid:104) x , y (cid:105) , we can obtain the attention weights from two different directions, denoted as W x y and W y x .",
"As alignment is bijective, W x y is supposed to be equal to the transpose of W y x .",
"We encourage this kind of symmetry through an agreement loss: L a = MSE (cid:16) W x y , W (cid:62) y x (cid:17) (14) where MSE represents the mean squared error.",
"For vanilla attention, L a is hardly small because of the normalization constraint.",
"As shown in Figure 4, due to the use of softmax activation, the minimal value of L a is 0 .",
"25 for vanilla attention.",
"Using leaky attention, our approach can achieve a lower agreement loss ( L a = 0.1) by adjusting the weights on the leak position.",
"However, our model may converge to a degenerate case of zero agreement loss where attention weights are all zero except for the leak position.",
"We circumvent this case by introducing an entropy loss on the attention weights: L e, x y = 1 II (cid:88) i =1 J (cid:88) j =1 W ij x y log W ij (15) W ij x y = W ij x y + (cid:80) j ( W ij x y + ) (16) where W ij x y is the renormalized attention weights and is a smoothing hyperparamter.",
"Similarly, we have L e, y x for the inverse direction.",
"We jointly train two directional models using the following loss: L = L x y + L y x + L a + ( L e, x y + L e, y x ) (17) where L x y and L y x are NLL losses, and are hyperparameters.",
"When extracting alignments, we compute an alignment score S ij for y i and x j as the harmonic mean of attention weights W ij x y and W ji y x from two directional models:",
"S ij = 2 W ij x y W ji y x W ij x y + W ji y x",
"We use the harmonic mean because we assume a large S ij requires both W ij x y and W ji y x to be large.",
"Word alignments can be induced from the alignment score matrix as follows: A ij = (cid:26) 1 if S ij 0 otherwise (19) where is a threshold.",
"We conducted our experiments on four public datasets: German-English (De-En), English-French (En-Fr), Romanian-English (Ro-En) and Chinese-English (Zh-En).",
"The Chinese-English training set is from the LDC corpus that consists of 1.2M sentence pairs.",
"For validation and testing, we used the Chinese-English alignment dataset from Liu et al. (2005) 3 , which contains 450 sentence pairs for validation and 450 for testing.",
"For other three language pairs, we followed the experimental setup in 3 http://nlp.csai.tsinghua.edu.cn/ ly/systems/ TsinghuaAligner/TsinghuaAligner.html (Zenkel et al., 2019, 2020) and used the preprocessing scripts from Zenkel et al. (2019) 4 .",
"Following Ding et al. (2019), we take the last 1000 sentences of the training data for these three datasets as validation sets.",
"We used a joint source and target Byte Pair Encoding (BPE) (Sennrich et al., 2016) with 40k merge operations.",
"During training, we filtered out sentences with the length of 1 to ensure the validity of the masking process.",
"We implemented our model based on the Transformer architecture (Vaswani et al., 2017).",
"The encoder consists of 6 standard Transformer encoder layers.",
"The decoder is composed of 6 layers, each of which contains static-KV attention while only the last layer is equipped with leaky attention.",
"We set the embedding size to 512, the hidden size to 1024, and attention heads to 4. The input and output embeddings are shared for the decoder.",
"We trained the models with a batch size of 36K tokens.",
"We used early stopping based on the prediction accuracy on the validation sets.",
"We tuned the hyperparameters via grid search on the Chinese-English validation set as it contains gold word alignments.",
"In all of our experiments, we set = 0 .",
"05 (Eq.",
"(16)), = 5 , = 1 (Eq.",
"(17)) and = 0 .",
"2 (Eq.",
"(19)).",
"The evaluation metric is Alignment Error Rate (AER) (Och and Ney, 2000).",
"We introduce the following unsupervised neural baselines besides two statistical baselines FASTALIGN and GIZA++:",
"NAIVE-ATT (Garg et al., 2019): a method that induces alignments from cross-attention weights of the best (usually penultimate) decoder layer in a vanilla Tranformer.",
"NAIVE-ATT-LAST : same as NAIVE-ATT except that only the last decoder layer performs cross-attention.",
"ADDSGD (Zenkel et al., 2019): a method that adds an extra alignment layer to repredict the to-be-aligned target token.",
"MTL-FULLC (Garg et al., 2019): a method that supervises an attention head with symmetrized NAIVE-ATT alignments in a multitask learning framework.",
"4 https://github.com/lilt/alignment-scripts Method Guided De-En En-Fr Ro-En Zh-En FAST-ALIGN (Dyer et al., 2013) N 25.7 12.1 31.8 -GIZA++ (Och and Ney, 2003) N 17.8 6.1 26.0 18.5 NAIVE-ATT (Garg et al., 2019) N 31.9 18.5 32.9 28.9 NAIVE-ATT-LASTN 28.4 17.7 32.4 26.4 ADDSGD (Zenkel et al., 2019) N 21.2 10.0 27.6 MTL-FULLC (Garg et al., 2019) N 20.2 7.7 26.0 BAO (Zenkel et al., 2020) N 17.9 8.4 24.1 SHIFT-ATT (Chen et al., 2020) N 17.9 6.6 23.9 20.2 MTL-FULLC-GZ (Garg et al., 2019) Y 16.0 4.6 23.1 -BAO-GUIDED (Zenkel et al., 2020) Y 16.3 5.0 23.4 SHIFT-AET (Chen et al., 2020) Y 15.4 4.7 21.2 17.2 MASK-ALIGNN 14.4 4.4 19.5 13.8 Table 1: Alignment Error Rate (AER) scores on four datasets for different alignment methods.",
"BAO (Zenkel et al., 2020): an improved version of ADDSGD that extracts alignments with Bidirectional Attention Optimization.",
"SHIFT-ATT (Chen et al., 2020): a method that induces alignments when the to-be-aligned tatget token is the decoder input instead of the output.",
"We also included three additional baselines with guided training: (1) MTL-FULLC-GZ (Garg et al., 2019) which replaces the alignment labels in MTLFULLC with GIZA++ results, (2) BAO-GUIDED (Zenkel et al., 2020) which uses alignments from BAO for guided alignment training, (3) SHIFTAET (Chen et al., 2020) which trains an additional alignment module with supervision from symmetrized SHIFT-ATT alignments.",
"Table 1 shows the results on four datasets.",
"Our approach significantly outperforms all statistical and neural baselines.",
"Specifically, it improves over GIZA++ by 1.7-6.5 AER points across different language pairs without using any guided alignment loss, making it a good substitute to this commonly used statistical alignment tool.",
"Compared to SHIFTATT , the best neural methods without guided training, our approach achieves a gain of 2.2-6.4 AER points with fewer parameters (as we remove some cross-attention sublayers in the decoder).",
"stantial improvements over all methods.",
"For example, on the Romanian-English dataset, it improves over SHIFT-AET by 1.7 AER points.",
"Recall that our method is fully end-to-end, which does not require a time-consuming process of obtaining pseudo alignments for full training data.",
"Table 2 shows the ablation results on the German-English dataset.",
"As we can see, masked modeling seems to play a critical role since removing it will deteriorate the performance by at least 9.0 AER.",
"We also find that leaky attention and agreement-based training and inference are both important.",
"Removing any of them will significantly diminish human rights throughout the world in 1995 1996 MR i n de r w e l t 1995 \\ 1996",
"Figure 5 shows the attention weights from vanilla and leaky attention and Table 3 presents the norms of the transformed value vectors of each source token for two types of attention.",
"For vanilla attention, we can see large weights on the high frequency token der and the small norm of its transformed value vector.",
"As a result, the target token in will be wrongly aligned to der.",
"While for leaky attention, we observe a similar phenomenon on the leak position [NULL] , and in will not be aligned to any source tokens since the weights on all source tokens are small.",
"This example shows leaky attention can effectively prevent the collector phenomenon.",
"Removing End Punctuation To further investigate the performance of leaky attention, we tested an extraction method that excludes the attention weights on the end punctuation of a source sentence.",
"The reason behind this is that when the source sentence contains the end punctuation, it will act as the collector in most cases.",
"Therefore removing it will Method w/ punc.",
"alleviate the effect of collectors to a certain extent.",
"Table 4 shows the comparison results.",
"For vanilla attention, removing end punctuation obtains a gain of 7.7 AER points.",
"For leaky attention, however, such extraction method brings no improvement on alignment quality.",
"This suggests that leaky attention can effectively alleviate the problem of collectors.",
"Case Study Figure 6 shows the attention weights from four different models for the example in Figure 1. As we have discussed in Section 1, in this example, NMT-based methods might fail to resolve ambiguity when predicting the target token tokyo.",
"From the attention weight matrices, we can see that NMT-based methods (Figures",
"6(b) and",
"6(c)) indeed put high weights wrongly on 1968 in the source sentence.",
"As for MASK-ALIGN , we can see i was born in tokyo in 1968 .",
"that the attention weights are highly consistent with the gold alignment, showing that our method can generate sparse and accurate attention weights.",
"Prediction and Alignment We analyzed the relevance between the correctness of word-level prediction and alignment.",
"We regard a word as correctly predicted if any of its subwords are correct and as correctly aligned if one of its possible alignment is matched.",
"Figure 7 shows the results.",
"We divide target tokens into four categories: 1. cPcA: correct prediction & correct alignment; 2. wPcA: wrong prediction & correct alignment; 3. cPwA: correct prediction & wrong alignment; 4. wPwA: wrong prediction & wrong alignment.",
"Compared with other methods, MASK-ALIGN significantly reduces the alignment errors caused by wrong predictions (wPwA).",
"In addition, the number of the tokens with correct prediction but wrong alignment (cPwA) maintains at a low level, indicating that our model does not degenerate into a target masked language model despite the use of bidirectional target context.",
"Our work is closely related to unsupervised neural word alignment.",
"While early unsupervised neural aligners (Tamura et al., 2014; Alkhouli et al., 2016; Peter et al., 2017) failed to outperform their statistical counterparts such as FAST-ALIGN (Dyer et al., 2013) and GIZA++ (Och and Ney, 2003), recent studies have made significant progress by inducing alignments from NMT models (Garg et al., 2019; Zenkel et al., 2019, 2020; Chen et al., 2020).",
"Our work differs from prior studies in that we design a novel self-supervised model that is capable of utilizing more target context than NMT-based models to generate high quality alignments without using guided training.",
"Our work is also inspired by the success of conditional masked language models (CMLMs) (Ghazvininejad et al., 2019), which have been applied to non-autoregressive machine translation.",
"The CMLM can leverage both previous and future context on the target side for sequence-to-sequence tasks with the masking mechanism.",
"Kasai et al. (2020) extend it with a disentangled context Transformer that predicts every target token conditioned on arbitrary context.",
"By taking the characteristics of word alignment into consideration, we propose to use static-KV attention to achieve masking and aligning in parallel.",
"To the best of our knowledge, this is the first work that incorporates a CMLM into alignment models.",
"We have presented a self-supervised neural alignment model MASK-ALIGN .",
"Our model parallelly masks out and predicts each target token.",
"We propose static-KV attention and leaky attention to achieve parallel computation and address the garbage collectors problem, respectively.",
"Experiments show that MASK-ALIGN achieves new state-of-the-art results without using the guided alignment loss.",
"In the future, we plan to extend our method to directly generate symmetrized alignments without leveraging the agreement between two unidirectional models.",
"This work was supported by the National Key R&D Program of China (No. 2017YFB0202204), National Natural Science Foundation of China (No.61925601, No. 61772302) and Huawei Noah's Ark Lab.",
"We thank all anonymous reviewers for their valuable comments and suggestions on this work."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"method",
"other",
"other",
"objective",
"objective",
"method",
"method",
"objective",
"abstain",
"objective",
"other",
"other"
] |
[
"We introduce SPARTA, a novel neural retrieval method that shows great promise in performance, generalization, and interpretability for open-domain question answering.",
"Unlike many neural ranking methods that use dense vector nearest neighbor search, SPARTA learns a sparse representation that can be efficiently implemented as an Inverted Index.",
"The resulting representation enables scalable neural retrieval that does not require expensive approximate vector search and leads to better performance than its dense counterpart.",
"We validated our approaches on 4 open-domain question answering (OpenQA) tasks and 11 retrieval question answering (ReQA) tasks.",
"SPARTA achieves new state-of-the-art results across a variety of open-domain question answering tasks in both English and Chinese datasets, including open SQuAD, CMRC and etc.",
"Analysis also confirms that the proposed method creates human interpretable representation and allows flexible control over the trade-off between performance and efficiency.",
"Open-domain Question Answering (OpenQA) is the task of answering a question based on a knowledge source.",
"One promising approach to solve OpenQA is Machine Reading at Scale (MRS) (Chen et al., 2017).",
"MRS leverages an information retrieval (IR) system to narrow down to a list of relevant passages and then uses a machine reading comprehension reader to extract the final answer span.",
"This approach, however, is bounded by its pipeline nature since the first stage retriever is not trainable and may return no passage that contains the correct answer.",
"To address this problem, prior work has focused on replacing the first stage retriever with a trainable ranker (Chidambaram et al., 2018; Lee et al., 2018; Wang et al., 2018).",
"End-to-end systems This work was done during an internship at SOCO have also been proposed to combine passage retrieval and machine reading by directly retrieving answer span (Seo et al., 2019; Lee et al., 2019).",
"Despite of their differences, the above approaches are all built on top of the dual-encoder architecture , where query and answer are encoded into fixed-size dense vectors, and their relevance score is computed via dot products.",
"Approximate nearest neighbor (ANN) search is then used to enable real-time retrieval for large dataset (Shrivastava and Li, 2014).",
"In this paper, we argue that the dual-encoder structure is far from ideal for open-domain QA retrieval.",
"Recent research shows its limitations and suggests the importance of modeling complex queries to answer interactions for strong QA performance.",
"Seo et al. (2019) shows that their best performing system underperforms the state-of-the-art due to query-agnostic answer encoding and its over-simplified matching function.",
"Humeau et al. (2019) shows the trade-off between performance and speed when moving from expressive cross-attention in BERT (Devlin et al., 2018) to simple inner product interaction for dialog response retrieval.",
"Therefore, our key research goal is to develop new a method that can simultaneously achieve expressive query to answer interaction and fast inference for ranking.",
"We introduce SPARTA (Sparse Transformer Matching), a novel neural ranking model.",
"Unlike existing work that relies on a sequence-level inner product, SPARTA uses token-level interaction between every query and answer token pair, leading to superior retrieval performance.",
"Concretely, SPARTA learns sparse answer representations that model the potential interaction between every query term with the answer.",
"The learned sparse answer representation can be efficiently saved in an Inverted Index, e.g., Lucene (McCandless et al., 2010), so that one can query a SPARTA index with almost the same speed as a standard search engine and enjoy the more reliable ranking performance without depending on GPU or ANN search.",
"Experiments are conducted on two settings: OpenQA (Chen et al., 2017) that requires phrase-level answers and retrieval QA (ReQA) that requires sentence-level answers (Ahmad et al., 2019).",
"Our proposed SpartaQA system achieves new state-of-the-art results across 15 different domains and 2 languages with significant performance gain, including OpenSQuAD, OpenCMRC and etc.",
"Moreover, model analysis shows that SPARTA exhibits several desirable properties.",
"First SPARTA shows strong domain generalization ability and achieves the best performance compared to both classic IR method and other learning methods in low-resources domains.",
"Second, SPARTA is simple and efficient and achieves better performance than many more sophisticated methods.",
"Lastly, it provides a human-readable representation that is easy to interpret.",
"In short, the contributions of this work include: A novel ranking model SPARTA that offers token-level query-to-answer interaction and enables efficient large-scale ranking.",
"New state-of-the-art experiment results on 11 ReQA tasks and 4 OpenQA tasks in 2 languages.",
"Detailed analyses that reveal insights about the proposed methods, including generalization and computation efficiency.",
"The classical approach for OpenQA depends on knowledge bases (KB)s that are manually or automatically curated, e.g., Freebase KB (Bollacker et al., 2008), NELL (Fader et al., 2014) etc.",
"Semantic parsing is used to understand the query and computes the final answer (Berant et al., 2013; Berant and Liang, 2014).",
"However, KB-based systems are often limited due to incompleteness in the KB and inflexibility to changes in schema (Ferrucci et al., 2010).",
"A more recent approach is to use text data directly as a knowledge base.",
"Dr.QA uses a search engine to filter to relevant documents and then applies machine readers to extract the final answer (Chen et al., 2017).",
"It needs two stages because all existing machine readers, for example, BERT-based models (Devlin et al., 2018), are prohibitively slow (BERT only processes a few thousands of words per second with GPU acceleration).",
"Many attempts have been made to improve the first-stage retrieval performance (Chidambaram et al., 2018; Seo et al., 2019; Henderson et al., 2019; Karpukhin et al., 2020; Chang et al., 2020).",
"The information retrieval community has shown that word embedding matching do not perform well for ad-hoc document search compared to classic methods (Guo et al., 2016; Xiong et al., 2017; Hui et al., 2017).",
"To increase the expressiveness of dual encoders, Xiong et al. (2017) develops kernel function to learn soft matching score at token-level instead of sequence-level.",
"Humeau et al. (2019) proposes Poly-Encoders to enable more complex interactions between the query and the answer by letting one encoder output multiple vectors instead of one vector.",
"Dhingra et al. (2020) incorporates entity vectors and multi-hop reasoning to teach systems to answer more complex questions.",
"(Lee et al., 2020) augments the dense answer representation with learned n-gram sparse feature from contextualized word embeddings, achieving significant improvement compared to the dense-only baseline.",
"Chang et al. (2020) explores various unsupervised pretraining objectives to improve dual-encoders' QA performance in the low-resources setting.",
"Unlike existing work based-on dual-encoders, we focus on learning sparse representation and emphasizing token-level interaction.",
"This is perhaps the most related to the sparse index from Den-SPI (Lee et al., 2020) and DeepCT (Dai and Callan, 2020).",
"Our approach is different because our proposed model is architecturally simpler and is generative so that it will understand words that not appear in the answer document, whereas the one developed at (Lee et al., 2020) only models n-grams appear in the document.",
"MacAvaney et al. (2020) also explores retrieval with sparse representations.",
"Our work is different from theirs in that we decide not to model the query order information, which enables the model to do full ranking.",
"Section 3.4 shows that our system can be easily deployed via inverted index under modern search engines, such as Lucene (McCandless et al., 2010).",
"First, we formally define the problem of answer ranking for question answering.",
"Let q be the input question, and A = { ( a, c ) } be a set of candidate Figure 1: SPARTA Neural Ranker computes token-level matching score via dot product.",
"answers.",
"Each candidate answer is a tuple ( a, c ) where a is the answer text and c is context information about a .",
"The objective is to find model parameter that rank the correct answer as high as possible,",
".i.e: = argmax E [ p (( a , c ) | q )] (1) This formulation is general and can cover many tasks.",
"For example, typical passage-level retrieval systems sets the a to be the passage and leaves c empty (Chen et al., 2017; Yang et al., 2019a).",
"The sentence-level retrieval task proposed at sets a to be each sentence in a text knowledge base and c to be the surrounding text (Ahmad et al., 2019).",
"Lastly, the phrase-level QA system sets a to be all valid phrases from a corpus and c to be the surrounding text (Seo et al., 2019).",
"This work focuses on the same sentence-level retrieval task (Ahmad et al., 2019) since it provides a good balance between precision and memory footprint.",
"Yet note that our methods can be easily applied to the other two settings.",
"In order to achieve both high accuracy and effi-ciency (scale to millions of candidate answers with real-time response), the proposed SPARTA index is built on top of two high-level intuitions.",
"Accuracy: retrieve answer with expressive embedding interaction between the query and answer, i.e., token-level contextual interaction.",
"Efficiency: create query agnostic answer representation so that they can be pre-computed at indexing time.",
"Since it is an offline operation, we can use the most powerful model for indexing and simplify the computation needed at inference.",
"As shown in Figure 1, a query is represented as a sequence of tokens q = [ t 1 , ...t | q | ] and each answer is also a sequence of tokens ( a, c ) = [ c 1 , ..a 1 , ..a | a | , c a +1 , ...c | c | ] .",
"We use a non-contextualized embedding to encode the query tokens to e i , and a contextualized transformer model to encode the answer and obtain contextualized token-level embedding s j : E ( q ) = [ e 1 , ...e | q | ] Query Embedding (2) H ( a, c ) = [ s 1 , ...s | c | ] Answer Embedding (3) Then the matching score f between a query and an answer is computed by: y i = max j [1 , | c | ] ( e Ti s j ) Term Matching (4) ( y i ) = ReLU ( y i + b ) Sparse Feature (5) f ( q, ( a, c )) = | q | (cid:88) i =0 log( ( y i ) + 1) Final Score (6) where b is a trainable bias.",
"The final score between the query and answer is the summation of all individual scores between each query token and the answer.",
"The logarithm operations normalize each individual score and weaken the overwhelmingly large term score.",
"Additionally, there are two key design choices worth of elaboration.",
"flow (Seo et al., 2016), relevance between every query and answer token pair is computed via dot product and max pooling in Eq.",
"4. Whereas in a typical dual-encoder approach, only sequence-level interaction is computed via dot product.",
"Results in our experiment section show that fine-grained interaction is crucial to obtain significant accuracy improvement.",
"Additionally, s j is obtained from powerful bidirectional transformer encoders, e.g. BERT and only needs to be computed at the indexing time.",
"On the other hand, the query embedding is non-contextual, a trade-off needed to enable real-time inference, which is explained in Section 3.4 Sparsity Control Another key feature to enable efficient inference and memory foot print is sparsity.",
"This is achieved via the combination of log , ReLU and b in Eq.",
"5. The bias term is used as a threshold for y i .",
"The ReLU layer forces that only query terms with y i > 0 have impact to the final score, achieving sparse activation.",
"The log operation is proven to be useful via experiments for regularizing individual term scores and leads to better performance and more generalized representation.",
"Implementation In terms of implementation, we use a pretrained 12-layer, 768 hidden size bert-base-uncased as the answer encoder to encode the answer and their context (Devlin et al., 2018).",
"To encode the difference between the answer sequence and its surrounding context, we utilized the segment embedding from BERT, i.e. the answer tokens have segment_id = 1 and the context tokens havesegment_id = 0 .",
"Moreover, the query tokens are embedded via the word embedding from the bert-base-uncased with dimension 768.",
"The training of SPARTA uses cross entropy learning-to-rank loss and maximizes Eq.",
"7. The objective tries to distinguish between the true relevant answer ( a + , c + ) and irrelevant/random answers K for each training query q : J = f ( q, ( a + , c + )) log (cid:88) k K e f ( q, ( a k ,c k )) (7) The choice of negative samples K are crucial for effective learning.",
"Our study uses two types of negative samples: 50% of the negative samples are randomly chosen from the entire answer candidate set, and the rest 50% are chosen from sentences that are nearby to the ground truth answer a .",
"The second case requires the model to learn the fine-grained difference between each sentence candidate instead of only rely on the context information.",
"The parameters to learn include both the query encoder E and the answer encoder H .",
"Parameters are optimized using back propagation (BP) through the neural network.",
"One major novelty of SPARTA is how one can use it for real-time inference.",
"That is for a testing query q = [ t 0 , ...t | q | ] , the ranking score between q and an answer is: LOOKUP ( t, ( a, c )) = log( Eq. 5 ) t V (8) f ( q, ( a, c )) = | q | (cid:88) i =1 LOOKUP ( t i , ( a, c )) (9) Since the query term embedding is non-contextual, we can compute the rank feature ( t, ( a, c )) for every possible term t in the vocabulary V with every answer candidate.",
"The result score is cached in the indexing time as shown in Eq.",
"8. At inference time, the final ranking score can be computed via O(1) look up plus a simple summation as shown in Eq.",
"9. More importantly, the above computation can be efficiently implemented via a Inverted Index (Manning et al., 2008), which is the underlying data structure for modern search engines, e.g. Lucene (McCandless et al., 2010) as shown in Figure 1(b).",
"This property makes it easy to apply SPARTA to real-world applications.",
"It is not hard to see the relationship between SPARTA and classic BM25 based methods.",
"In the classic IR method, only the tokens that appeared in the answer are saved to the Inverted Index.",
"Each term's score is a combination of Term Frequency and Inverted Document Frequency via heuristics (Manning et al., 2008).",
"On the other hand, SPARTA learns which term in the vocabulary should be inserted into the index, and predicts the ranking score directly rather than heuristic calculation.",
"This enables the system to find relevant answers, even when none of the query words appeared in the answer text.",
"For example, if the answer sentence is Bill Gates founded Microsoft\", a SPARTA index will not only contain the tokens in the answer, but also include relevant terms, e.g. who , founder , entrepreneur and etc.",
"SPARTA is also related to generative QA.",
"The scoring between ( a, c ) and every word in the vocabulary V can be understood as the un-normalized probability of log p ( q | a ) = (cid:80) | q | i log p ( t i | a ) with term independence assumption.",
"Past work such as Lewis and Fan (2018); Nogueira et al. (2019) trains a question generator to score the answer via likelihood.",
"However, both approaches focus on auto-regressive models and the quality of question generation and do not provide an end-to-end solution that enables stand-alone answer retrieval.",
"We consider an Open-domain Question Answering (OpenQA) task to evaluate the performance of SPARTA ranker.",
"Following previous work on OpenQA (Chen et al., 2017; Wang et al., 2019; Xie et al., 2020), we experiment with two English datasets: SQuAD (Rajpurkar et al., 2016), Natural Questions (NQ) (Kwiatkowski et al., 2019); and two Chinese datasets: CMRC (Cui et al., 2018), DRCD (Shao et al., 2018).",
"For each dataset, we used the version of Wikipedia where the data was collected from.",
"Preliminary results show that it is crucial to use the right version of Wikipedia to reproduce the results from baselines.",
"We compare the results with previous best models.",
"System-wise we follow the 2-stage ranker-reader structure used in (Chen et al., 2017).",
"Ranker: We split all documents into sentences.",
"Each sentence is treated as a candidate answer a .",
"We keep the surrounding context words of each candidate answer as its context c .",
"We encode at most 512 word piece tokens and truncate the context surrounding the answer sentence with equal window size.",
"For model training, bert-base-uncased is used as the answer encoder for English, and chinese-bert-wwm is used for Chinese.",
"We reuse the word embedding from corresponding BERT model as the term embedding.",
"Adam (Kingma and Ba, 2014) is used as the optimizer for fine-tuning with a learning rate 3e-5.",
"The model is fine-tuned for at most 10K steps and the best model is picked based on validation performance.",
"Reader: We deploy a machine reading comprehension (MRC) reader to extract phrase-level answers from the top-K retrieved contexts.",
"For English tasks, we fine-tune on span-bert (Joshi et al., 2020).",
"For Chinese tasks, we fine-tune on chinese-bert-wwm (Cui et al., 2020).",
"Two additional proven techniques are used to improve performance.",
"First, we use global normalization (Clark and Gardner, 2017) to normalize span scores among multiple passages and make them comparable among each other.",
"Second, distant supervision is used.",
"Concretely, we first use the ranker to find top-10 passages for all training data from Wikipedia corpus.",
"Then every mention of the oracle answers in these contexts are treated as training examples.",
"This can ensure the MRC reader to adapt to the ranker and make the training distribution closer to the test distribution (Xie et al., 2020).",
"Lastly, evaluation metrics include the standard MRC metric: EM and F1-score.",
"Exact Match (EM): if the top-1 answer span matches with the ground truth exactly.",
"F1 Score: we compute word overlapping between the returned span and the ground truth answer at token level.",
"Table 1 and 2 shows the SPARTA performance in OpenQA settings, tested in both English and Chinese datasets.",
"Experimental results show that SPARTA retriever outperforms all existing models and obtains new state-of-the-art results on all four datasets.",
"For OpenSQuAD and OpenNQ, SPARTA OpenCMRC Model F1 EM BERTserini(Xie et al., 2020) 60.9 44.5 BERTserini+DS (Xie et al., 2020) 64.6 48.6 SPARTA 80.2 63.1 OpenDRCD Model F1 EM BERTserini (Xie et al., 2020) 65.0 50.7 BERTserini+DS (Xie et al., 2020) 67.7 55.4 SPARTA 74.6 63.1 Table 2: Results on Chinese Open CMRC and DRCD outperforms the previous best system (Asai et al., 2019) by 2.7 absolute F1 points and 5.1 absolute EM points respectively.",
"For OpenCMRC and OpenDRCD, SPARTA achieves a 15.3 and 6.7 absolute F1 points improvement over the previous best system (Xie et al., 2020).",
"Notably, the previous best system on OpenSQuAD and OpenNQ depends on sophisticated graph reasoning (Asai et al., 2019), whereas the proposed SPARTA system only uses single-hop ranker and require much less computation power.",
"This suggests that for tasks that requires only single-hop reasoning, there is still big improvement room for better ranker-reader QA systems.",
"We also consider Retrieval QA (ReQA), a sentence-level question answering task (Ahmad et al., 2019).",
"The candidate answer set contains every possible sentence from a text corpus and the system is expected to return a ranking of sentences given a query.",
"The original ReQA only contains SQuAD and NQ.",
"In this study, we extend ReQA to 11 different domains adapted from (Fisch et al., 2019) to evaluate both in-domain performance and out-of-domain generalization .",
"The details of the 11 ReQA domains are in Table 3 and Appendix.",
"The in-domain scenarios look at domains that have enough training data (see Table 3).",
"The models are trained on the training data and the evaluation is done on the test data.",
"On the other hand, the out-of-domain scenarios evaluate systems' performance on test data from domains not included in the training, making it a zero-shot learning problem.",
"There are two out-of-domain settings: (1) training data only contain SQuAD (2) training data contain only SQuAD and NQ.",
"Evaluation is carried on all the domains to test systems' ability to generalize to unseen data distribution.",
"For evaluation metrics, we use Mean Reciprocal Rank (MRR) as the criteria.",
"The competing baselines include: BM25 : a strong classic IR baseline that is diffi-cult to beat (Robertson et al., 2009).",
"USE-QA 1 : universal sentence encoder trained for QA task by Google (Yang et al., 2019b).",
"USE-QA uses the dual-encoder architecture and it is trained on more than 900 million mined question-answer pairs with 16 different languages.",
"Poly-Encoder (Poly-Enc): Poly Encoders improves the expressiveness of dual-encoders with two-level interaction (Humeau et al., 2019).",
"We adapted the original dialog model for QA retrieval: two bert-base-uncased models are used as the question and answer encoders.",
"The answer encoder has 4 vector outputs.",
"Table 4 shows the MRR results on the five datasets with in-domain training.",
"SPARTA can achieve the best performance across all domains with a large margin.",
"In terms of average MRR across the five domains, SPARTA is 114.3% better than BM25, 50.6% better than USE-QA and 26.5% better than Poly-Encoders.",
"Two additional insights can be drawn from the results.",
"First, BM-25 is a strong baseline and does not require training.",
"It performs particularly well in domains that have a high-rate of word-overlapping between the answer and the questions.",
"For example, SQuAD's questions are generated by crowd workers who look at the ground truth answer, while ques-1 https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa Data BM25 USE-QA Poly Enc SPARTA (ours) SQuAD 58.0 62.5 64.6 78.5 News 19.4 26.2 28.3 46.6 Trivia 29.0 41.2 39.5 55.5 NQ 19.7 58.2 69.9 77.1 HotPot 23.9 25.5 51.8 63.8 Avg 30.0 42.7 50.8 64.3 Table 4: MRR comparison for the in-domain settings.",
"tion data from NQ/News are generated by question makers who do not see the correct answer.",
"BM25 works particularly well in SQuAD while performing the poorest in other datasets.",
"Similar observations are also found in prior research (Ahmad et al., 2019).",
"Second, the results in Table 4 confirms our hypothesis on the importance of rich interaction between the answer and the questions.",
"Both USE-QA and Poly Encoder use powerful transformers to encode the whole question and model word-order information in the queries.",
"However, their performance is bounded by the simple dot-product interaction between the query and the answer.",
"On the other hand, despite the fact that SPARTA does not model word-order information in the query, it is able to achieve a big performance gain compared to the baselines, confirming the effectiveness of the proposed token-level interaction method in Eq.",
"4. 5.2 Out-of-domain Generalization Table 5 summarized the results for out-of-domain performance comparison.",
"SPARTA trained only on SQuAD outperforms the baselines, achieving 54.1% gain compared to BM25, 26.7% gain compared to USE-QA and 25.3% gain compared to Poly-Encoders in terms of average MRR across 11 different datasets.",
"When SPARTA is trained on SQuAD+NQ, an additional 1.7 MRR improvement is gained compared to SPARTA-SQuAD.",
"We can observe that Poly-Encoder is able to achieve similar in-domain performance for the domains that are included in the training.",
"However, its performance decreases significantly in new domains, a 25.0% drop compared to its full performance for Poly-Encoder that is trained on SQuAD and 29.2% drop when it's trained on SQuAD+NQ.",
"from the training data much better to new domains.",
"When trained on SQuAD, its performance on News, Trivia, NQ, and HotPot is only 19.2% lower than the full performance and 18.3% drop when it's trained on SQuAD+NQ.",
"Also, we note that SPARTA's zero-shot performance on News (MRR=41.2) and Trivia (MRR=45.8) is even better than the full performance of Poly-Encoder (News MRR=28.3 and Trivia MRR=39.5).",
"One common limitation of deep neural network models is poor interpretability.",
"Take dense distributed vector representation for example, one cannot directly make sense of each dimension and has to use dimension reduction and visualization methods, e.g. TSNE (Maaten and Hinton, 2008).",
"On the contrary, the resulting SPARTA index is straightforward to interpret due to its sparse nature.",
"Specifically, we can understand a SPARTA vector by reading the top K words with non-zero f ( t, ( a, c )) , since these terms have the greatest impact to the final ranking score.",
"Table 6 shows some example outputs.",
"It is not hard to note that the generated terms for each answer sentence is highly relevant to both a and c , and contains not keywords that appeared in the answer, but also include terms that are potentially in the query but never appear in the answer itself.",
"Two experts manually inspect the outputs for 500 ( a, c ) data points from Wikipedia, and we summarize the following four major categories of terms that are predicted by SPARTA.",
"Conversational search understanding : the third row is an example.",
"Who appears to the top term, showing it learns Bill Gates is a person so that it's likely to match with Who questions.",
"Keyword identification : terms such as gates, google, magnate, yellowstone have high scores in the generated vector, showing that SPARTA learns which words are important in the answer.",
"Synonyms and Common Sense : benefactor, investors are examples of synonyms.",
"Also even though Utah does not appear in the answer, it is predicted as an important term, showing that SPARTA leverages the world-knowledge from a pretrained language model and knows Yellowstone is related to Utah.",
"Sparsity not only provides interpretability, but also offers flexibility to balance the trade-off of memory footprint vs. performance.",
"When there are memory constraints on the vector size, the SPARTA vector can be easily reduced by only keeping the top-K important terms.",
"Table 7 shows performance on SQuAD and NQ with varying K. The resulting sparse vector representation is very robust to smaller K. When only keeping the top 50 terms in each answer vector, SPARTA achieves 69.5 MRR, a better score than all baselines with only 1 .",
"6% memory footprint compared to Poly-Encoders (768 x 4 dimension).",
"NQ dataset is more challenging and requires more terms.",
"SPARTA achieves a close to the best performance with top-500 terms.",
"In short, we propose SPARTA, a novel ranking method, that learns sparse representation for better open-domain QA.",
"Experiments show that the proposed framework achieves the state-of-the-art performance for 4 different open-domain QA tasks in 2 languages and 11 retrieval QA tasks.",
"This con-firm our hypothesis that token-level interaction is superior to sequence-level interaction for better evidence ranking.",
"Analyses also show the advantages of sparse representation, including interpretability, generalization and efficiency.",
"Our findings also suggest promising future research directions.",
"The proposed method does not support multi-hop reasoning, an important attribute that enables QA systems to answer more complex questions that require collecting multiple evidence passages.",
"Also, current method only uses a bag-of-word features for the query.",
"We expect further gain by incorporating word-order information."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Sarcasm is a sophisticated linguistic phenomenon to express the opposite of what one really means.",
"With the rapid growth of social media, multimodal sarcastic tweets are widely posted on various social platforms.",
"In multimodal context, sarcasm is no longer a pure linguistic phenomenon, and due to the nature of social media short text, the opposite is more often manifested via cross-modality expressions.",
"Thus traditional text-based methods are insufficient to detect multimodal sarcasm.",
"To reason with multimodal sarcastic tweets, in this paper, we propose a novel method for modeling cross-modality contrast in the associated context.",
"Our method models both cross-modality contrast and semantic association by constructing the Decomposition and Relation Network (namely D&R Net).",
"The decomposition network represents the commonality and discrepancy between image and text, and the relation network models the semantic association in cross-modality context.",
"Experimental results on a public dataset demonstrate the effectiveness of our model in multimodal sarcasm detection.",
"Sarcasm is a sophisticated linguistic phenomenon, defined by Merriam-Webster Dictionary as 'The use of words that mean the opposite of what you really want to say, especially in order to insult someone, to show irritation, or to be funny'.",
"It can not only disguise the hostility of the speaker, but also enhance the effect of mockery or humor on the listener (Tay et al., 2018).",
"As an important clue to analyze people's true sentiment and intentions in communication from implicit expressions, automatic sarcasm detection plays a significant role in various applications that require the knowledge of people's sentiment or opinion (Cai et al., 2019), such as customer service, political stance detection,",
"Existing work on sarcasm detection mainly focuses on text data.",
"Early feature engineering approaches rely on the signal indicators of sarcasm, such as syntactic patterns, lexical indicators and special symbols (Tsur et al., 2010; Davidov et al., 2010; Gonzalez-Ibanez et al., 2011).",
"As sarcasm is often associated with implicit contrast or disparity between conveyed sentiment and user's situation in context (Riloff et al., 2013), contextual contrast information at conversation, tweet or word level is also employed to detect sarcasm in text (Bamman and Smith, 2015; Rajadesingan et al., 2015; Joshi et al., 2016).",
"Recently, deep learning based methods are adopted to train end-to-end neural networks (Baziotis et al., 2018; Tay et al., 2018), achieving state-of-the-art performance.",
"With the fast growing and diverse trend of social media, multimodal sarcastic tweets which convey abundant user sentiment are widely posted on various social platforms.",
"There is a great demand for multimodal sarcasm detection to facilitate various applications.",
"However, traditional text-based methods are not applicable to detect multimodal sarcastic tweets (Fig.1).",
"In multimodal context, sarcasm is no longer a pure linguistic phenomenon, but rather the combined expressions of multiple modalities (i.e. text, image, etc.).",
"As the short text in tweet often has insufficient contextual information, contextual contrast implied in multimodal sarcasm is typically conveyed by cross-modality expressions.",
"For example, in Fig.1b, we can not reason about sarcasm intention simply from the short text 'Perfect flying weather in April' until we notice the downpour outside the airplane window in the attached image.",
"Therefore, compared to text-based methods, the essential research issue in multimodal sarcasm detection is the reasoning of cross-modality contrast in the associated situation.",
"Several related work on multimodal sarcasm detection has been proposed (Schifanella et al., 2016; Cai et al., 2019; Castro et al., 2019).",
"However, they mainly focus on the fusion of multimodal data, and did not address the above key research issue in reasoning with multimodal sarcasm.",
"There are still two main research challenges for multimodal sarcasm detection.",
"First, since sarcasm commonly manifests with a contrastive theme, this requires the detection model to have the ability to reason about cross-modality contrast or incongruity of situations.",
"Second, to ensure cross-modality contrast assessed in the associated common ground, the detection model needs the mechanism to concentrate on the semantic associated aspects of situations in cross-modality context.",
"This contextual contrast and semantic association information acquired, in turn, can provide salient evidence to interpret the detection of multimodal sarcasm.",
"To tackle the above challenges, in this paper, we propose a novel method to model both cross-modality contrast and semantic association by constructing the Decomposition and Relation Network (i.e. D&R Net) for multimodal sarcasm detection task.",
"The decomposition network implicitly models cross-modality contrast information via representing the commonality and discrepancy between image and text in tweets.",
"The relation network explicitly captures the semantic association between image and text via a cross-modality attention mechanism.",
"The main contributions of our work are as follows: We identify the essential research issue in multimodal sarcasm detection, and propose a method to model cross-modality contrast in the associated context of multimodal sarcastic tweets.",
"Network (D&R Net) to implicitly represent the contextual contrast and explicitly capture the semantic association between image and text, which provides the reasoning ability and word-level interpretability for multimodal sarcasm detection.",
"We compare our model with the existing state-of-the-art methods, and experimental results on a publicly available dataset demonstrate the effectiveness of our model in multimodal sarcasm detection.",
"Traditional sarcasm detection takes text-based approaches, including feature engineering, context based and neural network models.",
"Earlier feature engineering approaches are based on the insight that sarcasm usually occurs with specific signals, such as syntactic patterns (e.g. using high-frequency words and content words) (Tsur et al., 2010), lexical indicators (e.g. interjections and intensifiers) (Gonzalez-Ibanez et al., 2011), or special symbols (e.g. '?', '!', hashtags and emojis) (Davidov et al., 2010; Felbo et al., 2017).",
"As sarcasm is often associated with an implicit contrast or disparity between conveyed sentiment and user's situation in context (Riloff et al., 2013), some studies rely on this basic character of sarcasm to detect contextual contrast at different linguistic levels, including immediate communicative context between speaker and audience (Bamman and Smith, 2015), historical context between current and past tweets (Rajadesingan et al., 2015; Joshi et al., 2015), or word-level context by computing semantic similarity (Hernandez-Faras et al., 2015; Joshi et al., 2016).",
"Recently, researchers utilize the powerful techniques of neural networks to get more precise semantic representations of sarcastic text and model the sequential information of sarcastic context.",
"Some approaches consider the contextual tweets of target tweet, using RNN model for contextual tweets representation and modeling the relationship between target and contextual tweets for sarcastic text classification (Gonzalez-Ibanez et al., 2011; Zhang et al., 2016).",
"To conceive more indicative information, user embedding (Amir et al., 2016), emotion, sentiment, personality (Poria et al., 2016), speaker's psychological profile (Ghosh and Veale, 2017), cognitive features (Mishra et al., 2017), and syntactic features (Baziotis et al., 2018) are also incorporated into CNN/LSTM models to enhance the performance.",
"Furthermore, to overcome the black box problem of neural network model and reasoning with sarcasm, some novel methods such as neural machine translation framework (Peled and Reichart, 2017), and intra-attention mechanism (Tay et al., 2018) are explored to improve the interpretability of sarcasm detection.",
"With the prevalence of multimodal tweets, multimodal sarcasm detection has gained increasing research attention recently.",
"Schifanella et al. (2016) firstly tackle this task as a multimodal classification problem and concatenate manually designed features of image and text to classify sarcasm.",
"Cai et al. (2019) extend the input modalities with triple features (i.e. text feature, image feature and image attributes), and propose a hierarchical fusion model for the task.",
"Castro et al. (2019) firstly propose video-level multimodal sarcasm detection task and deal with it based on feature engineering via SVM.",
"However, these methods pay more attention to the fusion of multimodal features, and did not consider cross-modality contrast and semantic association information which is essential to deduce multimodal sarcastic tweets.",
"In this paper, we propose a novel method to model the cross-modality contrast and semantic association in multimodal context by constructing the Decomposition and Relation Network (D&R Net), which enables our model to reason with multimodal sarcastic tweets and provides pertinent evidence for interpretation.",
"Fig.2 illustrates the overall architecture of our proposed D&R Net for multimodal sarcasm detection, which is composed of four modules, preprocessing, encoding, decomposition network and relation network.",
"We first preprocess the image and text inputs and extract adjective-noun pairs (ANPs) from each image.",
"We then encode these triple inputs into hidden representations.",
"After that, we learn to represent the commonality and discrepancy between image and text in decomposition network as well as the multi-view semantic association information in relation network.",
"Finally, we feed these cross-modality representations into classification module for multimodal sarcasm detection.",
"Standard image, text and visual attributes (e.g. sun-net, scene, snow) are utilized in the previous multimodal sarcasm detection (Cai et al., 2019).",
"To enhance the image semantic understanding, we practice a better way to get more visual semantic information via extracting extra adjective-noun pairs from each image (e.g. great sunset, pretty scene, fresh snow in Fig.2).",
"Thus, our model accepts triple inputs.",
"where, T ext = [ W j ] Tj , T is the length of text sequence; ANP s = [ P i ] Ni , N is the number of adjective-noun pair, in which each pair P i contains an adjective word A i , a noun word N i and the probability value p i of this kind of ANP existing in the attached Image , P i = [( A i , N i ) , p i ] .",
"In encoding module, we map these triple inputs into hidden representations.",
"All textual words W j , A i , N i are firstly mapped into embedding vectors w j , a i , n i R d .",
"For each text, we utilize the bi-directional long short term memory (BiLSTM) network to represent textual sequence into a hidden representation vector and incorporate the contextual information.",
"It maps word embedding w j into hidden state h wj R d .",
"For each ANP, we directly compute the maxpool-ing result of its adjective and noun word embeddings as the hidden representation.",
"For each image, we adopt a pre-trained convolutional neural network to extract image feature and also encode the result into d -dimensional space.",
"We focus on contextual contrast of multimodal sarcastic tweets and design the decomposition network (D-Net) to represent the commonality and discrepancy of image and text in high-level spaces.",
"The D-Net breaks down the raw visual or textual representation into a shared subspace and unique visual or textual subspace through three layers.",
"The shared layer tends to extract invariant shared features f shared of image and text, and image or text layer is forced to decompose image or text into unique variant contrast features f unique , which can be defined as f shared = W shared f R d s (5) f unique = P f R d u (6) where f is the feature of input modality { image, text } , f image is the raw image encoding representation H m , f text is the last hidden state h wT of BiLSTM which is used as the overall representation of text, and W shared R d s d , P R d u d are projection matrices of shared space, unique visual space and textual space.",
"In multimodal sarcastic tweets, we expect our model to focus more on the opposite between different modality information.",
"Thus, we reinforce discrepancy between image and text, and on the contrary, weaken their commonality.",
"Specifically, we combine the above unique variant contrast features as the cross-modality contrast representation.",
"where denotes the concatenation operation.",
"We propose the relation network (R-Net) to fully capture the contextual association between image and text from multiple views.",
"The relationship between image and text is usually multi-coupled, that is text may involve multiple entities in images, whereas different regions of the image may also involve different text words.",
"We have already extracted multiple ANPs as the visual semantic information, which is beneficial to model multi-view associations between image and text according to different views of ANPs.",
"Thus, we propose the ANP-aware cross-modality attention layer to align textual words and ANPs via utilizing each ANP to query each textual word and computing their pertinence.",
"We first calculate the cross interactive attention matrix S RN T to measure how text words and image ANPs relate.",
"where W R d d is the parameter of bi-linear function, and each score s ij S indicates the semantic similarity between i -th ANP encoding h pi H p and j -th text word encoding h wj H w .",
"We then compute the cross-modality attention weight ij of i -th ANP for j -th textual word by normalizing the i -th row of attention matrix S , and calculate the weighted average of textual hidden states as the i -th ANP-aware textual representation r i R d : ij = e s ij (cid:80) Tj =1 e s ij (9) r i = T (cid:88) j =1 ij h wj (10) Thus, we query the text N times with different ANPs to get multi-view textual representations [ r 1 , r 2 , . . . , r N ] .",
"Our proposed ANP-aware cross-modality attention mechanism is a variant of multi-head attention (Vaswani et al., 2017) and can be considered as the cross-modality adaptation of topic-aware mechanism (Wei et al., 2019), modeling the cross-modality association between image and text from multiple ANP-aware points.",
"Next, we detail how to fuse such representations to get the final text representation.",
"We extract ANPs from each image and only select the Top N ANPs according to their extracted probability values [ p 1 , p 2 , . . . , p N ] .",
"Hence, different textual representations should be influenced by different ANP probability values.",
"Thus, we get the final cross-modality association representation r rel R d by calculating weighted average of these ANP-aware textual representations [ r 1 , r 2 , . . . , r N ] according to the related normalized ANP probability distributions.",
"Finally, we feed the above acquired cross-modality contrast and semantic association representations, denoted as r dec and r rel respectively, into the top fully-connected layer and use the sigmod function for binary sarcasm classification.",
"where w s R 1 (2 d u + d ) , b s R 1 are the parameters of fully-connected layer.",
"Our model optimizes two losses, including classification loss and orthogonal loss.",
"where y i is the ground truth of i -th sample (i.e., 1 for sarcasm and 0 for non-sarcasm ), and y i is the predicted label of our model.",
"In D-Net (Subsection 3.3), we share the same matrix for both image and text to ensure projecting them into the same subspace.",
"Besides, in initialization and training process, to ensure that the decomposed unique subspaces are unrelated or in conflict with each other, we impose their projection matrices P with the additional orthogonal constraint for the shared projection matrix W shared .",
"We convert these orthogonal constraints into the following orthogonal loss:",
"where (cid:107)(cid:107) 2 F denotes the Frobenius norm.",
"We finally minimize the combined loss function: L = L c + L o (17) where is the weight of orthogonal loss.",
"We use a publicly available dataset constructed by Cai et al. (2019) to evaluate our model for multimodal sarcasm detection.",
"Each sample in this dataset is image-text pair.",
"This dataset is collected from Twitter by querying special hashtag (e.g. #sarcasm, #sarcastic, #irony, #ironic etc.) for positive samples (i.e. sarcasm) and the others without such hashtags as negative samples (i.e. non-sarcasm).",
"The dataset has been divided into training set (80%), development set (10%) and test set (10%).",
"Details are given in Table 2.",
"For fair comparison, we adopt the same data preprocessing used in (Cai et al., 2019), replacing the mentions with a certain symbol user , cleaning up samples in which the regular words include 'sar-casm' related words (e.g. sarcasm, sarcastic, irony, ironic ) and co-occur words (e.g. jokes, humor, ex-gag ), and removing the stop words and URLs.",
"We separate the text sentence by NLTK toolkit and embed each token into 200-dimensional word embedding by GloVe (Pennington et al., 2014).",
"For image preprocessing, we first resize it into 224*224 and utilize pre-trained ResNet (He et al., 2016) to extract image feature.",
"We also use SentiBank toolkit 1 to extract 1200 ANPs and select the Top 5 ANPs as the visual semantic information of each image.",
"We encode the multimodal inputs into 200-dimensional hidden space, and set the dimension of invariant shared feature to 40, the dimension of unique variant contrast feature to 40, Finally, we optimize our model by Adam update rule with learning rate 0.01, mini-batch 128, and weight of orthogonal loss 0.5.",
"The dropout and early-stopping tricks are utilized to avoid overfitting.",
"Our work focus on the multimodal sarcasm detection using image and text modalities.",
"Thus, we compare our model with the only two existing related models using the same modalities.",
"MLP+CNN (Schifanella et al., 2016) concatenates multimodal features generated by textual MLP layer and visual CNN model for sarcasm classification, which is the first work on multimodal sarcasm detection.",
"1 ee.columbia.edu/ln/dvmm/vso/download/sentibank.html Hierarchical FM (Cai et al., 2019) takes text, image and image attributes as three modalities and fuses them with a multimodal hierarchical fusion model, which is the state-of-the-art method in multimodal sarcasm detection task.",
"We compare our model with multimodal baseline models with the F1-score and Accuracy metrics.",
"Table 1 shows the comparative results.",
"The MLP+CNN model simply takes the multimodal sarcasm detection as a general multimodal classification task via directly concatenating multimodal features for classification.",
"Thus, it gets the worst performance.",
"Hierarchical FM performs better than MLP+CNN by incorporating additional attributes that provide the visual semantic information and generating better feature representations via a hierarchical fusion framework.",
"However, these multimodal baselines pay more attention to the fusion of multimodal features.",
"In contrast, our D&R Net captures the essence of multimodal sarcasm via modeling cross-modality contrast in the associated context and achieves the best performance.",
"To further explore the effects of multimodal inputs for sarcasm detection, we compare our model with the representative text-based sarcasm detection models and an image-based baseline model.",
"ResNet (He et al., 2016) is widely used in many image classification tasks with prominent performance.",
"As there is no related work on image sarcasm detection, we fine-tune it for image sarcasm classification.",
"CNN (Kim, 2014) is a well-known model for many text classification tasks, which captures n-gram features by multichannel parameterized sliding windows.",
"BiLSTM (Graves and Schmidhuber, 2005) is a popular recurrent neural network to model text sequence and incorporate bidirectional context information.",
"MIARN (Tay et al., 2018) learns the intra-sentence relationship and sequential composition of sarcastic text, which is state-of-the-art method for text-only sarcasm detection.",
"We use F1-score and Accuracy as the evaluation metrics.",
"Table 3 shows the comparative results of our model and these unimodal baseline models.",
"Though ResNet demonstrates the superior performance in many image classification tasks, it performs relatively poor in sarcasm detection task.",
"It is because that the sarcasm intention or visual contrast context in the image is usually unobvious.",
"CNN and BiLSTM just treat the sarcasm detection task as a text classification task, ignoring the contextual contrast information.",
"Thus, their performances are worse than MIARN, which focuses on textual context to model the contrast information between individual words and phrases.",
"However, due to the nature of short text, relying on textual information is often insufficient, especially in multimodal tweets where cross-modality context relies the most important role.",
"Our D&R Net performs better than unimodal baselines, demonstrating the usefulness of modeling multiple modality information in providing additional cues through reasoning contextual contrast and association.",
"To evaluate the performance of each component used in our D&R Net, we conduct the detailed ablation studies on various variants of our model.",
"The ablation results are shown in Table 4.",
"In general, we find those variants underperform our model.",
"The most obvious declines come from the direct removal of our two core modules, D-Net and R-Net (see row 1, 3).",
"Comparing these two variants, we find that removing D-Net has greater performance drop than removing R-Net.",
"This suggests that modeling the cross-modality contrast in D-Net is more useful than cross-modality association in R-Net.",
"After removing the D-Net, the model only accepts the text and ANPs inputs.",
"Thus we Variant Evaluation Metric F1 Acc (cid:52) F1 (cid:52) Acc D&R Net 80.60 84.02 -1 D-Net 77.63 82.27 -2.97 -1.75 2 + Image 79.10 82.73 -1.50 -1.29 3 R-Net 79.90 83.10 -0.70 -0.92 4 + ANPs 78.68 83.11 -1.92 -0.91 5 ANP, +Attribute 79.52 83.12 -1.08 -0.90 6 ANP-P.F., +MaxPool 79.80 83.27 -0.80 -0.75 7 ANP-P.F., +AvgPool 79.86 83.42 -0.74 -0.60 Table 4: Ablation results of our D&R Net further incorporate image information via directly concatenating image encoding in the final fusion layer (see row 2).",
"The improvement compared with D-Net shows the effectiveness of using image modality for multimodal sarcasm detection.",
"Similarly, we also add the representation of ANPs to the fusion layer after removing the R-Net module (see row 4).",
"However, the performance unexpectedly continues to decrease.",
"One possible reason for this is that the fusion of ANPs affects the original decomposition results in spite of using triple inputs.",
"It is worth mentioning that replacing our ANPs with noun attributes used in (Cai et al., 2019) underper-forms our model (see row 5).",
"This result indicates that ANPs are more useful in modeling semantic association between image and text compared with noun attributes.",
"It is because that the adjective-noun words in ANPs are more semantically informative than noun-only words.",
"Finally, we notice that our ANP-probability fusion (i.e. ANP-P.F.) strategy provides a means for obtaining reasonable performance compared with standard pooling operations, MaxPool and AvgPool (see row 6, 7), with ANP-probability weighted average performing the best.",
"In this section, we provide case studies through several practical examples to illustrate that our D&R Net really learns to reason multimodal sarcastic tweets with interpretability.",
"Fig.3 shows some multimodal non-sarcasm and sarcasm examples that our model correctly predicts.",
"For those text-only or image-only models, it's almost impossible to detect the sarcasm intention of Fig.3a and 3b.",
"We also show the results of the extracted ANPs from each image and these ANPs actually provide useful information for sarcasm This is so beautiful.",
"detection.",
"For example, the ANPs heavy snow , cloudy mountains , minsty winter of Fig.3a show the great conflict with text word 'Spring', conveying the strong intention of sarcasm.",
"In addition, our extracted ANPs are more semantically meaningful than the noun-only attributes used in (Cai et al., 2019).",
"The wet road and empty street are more informative than noun-only words road and street in Fig.3b.",
"The cute girls and energetic performance are more in line with the text words 'so beautiful' compared with noun-only words girls and performance in Fig.3d to discriminate between sarcasm and non-sarcasm.",
"Our proposed ANP-aware cross-modality attention mechanism explicitly calculates the cross interactive attention between text words and image ANPs, providing the explainable reasoning evidence for sarcasm detection.",
"We further illustrate this attention mechanism by visualizing its outputs on two multimodal sarcastic tweets in Fig.4.",
"The results show that our proposed attention mechanism works well for multimodal sarcasm detection by explicitly identify the relationship between image regions and text words.",
"For instance, in Fig.4a, the user satirically mentions eclipse for too many clouds covering the sun.",
"Our D&R Net accurately detects sarcasm intention via focusing on the text words 'eclipse',",
"'!', 'EclipseDay' with multiple visual semantic ANP views: stormy , fluffy , lovely and rainy clouds .",
"In Fig.4b, our model pays more attention to the textual phrase 'these lovely books' with stupid sign, strange sign , and bad sign ANPs which refer to the emoji in the attached image.",
"Consequently, it is easy for our model to detect the sarcasm intention that the books are NOT 'lovely' at all.",
"In this paper, we identify the essential research issue in multimodal sarcasm detection.",
"To model the cross-modality contrast in the associated context of multimodal sarcastic tweets, we propose the D&R Net to represent the commonality and discrepancy between image and text and multi-view semantic associations in cross-modality context.",
"Our model is capable of reasoning multimodal sarcastic tweets with word-level interpretation.",
"Experimental results on a public dataset show that our model achieves the state-of-the-art performance compared with the existing models.",
"This work is supported in part by the Ministry of Science and Technology of China under Grants #2016QY02D0305 and #2018ZX10201001, and National Natural Science Foundation of China under Grants #11832001, #71702181 and #71621002."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"other"
] |
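The D-Net described in the sentences above (Eqs. 5-7 and the orthogonal loss L_o) can be made concrete with a short sketch. The code below is a minimal, hypothetical PyTorch rendering, not the authors' released implementation: the dimensions d=200, d_s=40 and d_u=40 come from the stated experimental setup, while composing L_o from squared Frobenius norms between each unique projection and the shared one, and between the two unique projections, is our reading of the "orthogonal constraint".

```python
import torch
import torch.nn as nn

class DNet(nn.Module):
    def __init__(self, d=200, d_s=40, d_u=40):
        super().__init__()
        # shared projection W_shared (Eq. 5), reused for both modalities
        self.W_shared = nn.Linear(d, d_s, bias=False)
        # unique projections P (Eq. 6), one per modality
        self.P_img = nn.Linear(d, d_u, bias=False)
        self.P_txt = nn.Linear(d, d_u, bias=False)

    def forward(self, f_img, f_txt):
        # f_img: raw image encoding H_m; f_txt: last BiLSTM hidden state h_{w_T}
        unique_img = self.P_img(f_img)
        unique_txt = self.P_txt(f_txt)
        # reinforce discrepancy: keep only the unique contrast features (Eq. 7)
        r_dec = torch.cat([unique_img, unique_txt], dim=-1)  # (batch, 2 * d_u)
        return r_dec

    def orthogonal_loss(self):
        # assumed form of L_o: penalize overlap between subspace projections
        Ws, Pi, Pt = self.W_shared.weight, self.P_img.weight, self.P_txt.weight
        return (torch.norm(Pi @ Ws.t(), p='fro') ** 2
                + torch.norm(Pt @ Ws.t(), p='fro') ** 2
                + torch.norm(Pi @ Pt.t(), p='fro') ** 2)

# usage sketch: batch of 4 image/text encodings
d_net = DNet()
r_dec = d_net(torch.randn(4, 200), torch.randn(4, 200))  # (4, 80)
loss_o = 0.5 * d_net.orthogonal_loss()  # lambda = 0.5 as in the reported setup
```

Keeping only the unique features in r_dec mirrors the paper's choice of reinforcing discrepancy while weakening commonality; the shared features enter training only through the orthogonality penalty in this sketch.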
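The ANP-aware cross-modality attention (Eqs. 8-10) and the ANP-probability fusion admit an equally small sketch. Again this is an illustrative reconstruction under the stated shapes (N ANP encodings, T word encodings, d=200, Top-5 ANPs), not the authors' code; the class name ANPAttention is ours.

```python
import torch
import torch.nn as nn

class ANPAttention(nn.Module):
    """Sketch of ANP-aware cross-modality attention plus probability fusion."""
    def __init__(self, d=200):
        super().__init__()
        self.W = nn.Parameter(torch.empty(d, d))  # bilinear parameter W (Eq. 8)
        nn.init.xavier_uniform_(self.W)

    def forward(self, H_p, H_w, p):
        # H_p: (N, d) ANP encodings; H_w: (T, d) word encodings; p: (N,) ANP probs
        S = H_p @ self.W @ H_w.t()          # (N, T) cross interactive attention matrix
        alpha = torch.softmax(S, dim=-1)    # normalize each row i over the T words (Eq. 9)
        R = alpha @ H_w                     # (N, d) ANP-aware textual reps r_1..r_N (Eq. 10)
        weights = p / p.sum()               # normalized ANP probability distribution
        r_rel = (weights.unsqueeze(-1) * R).sum(dim=0)  # (d,) association representation
        return r_rel

# usage sketch with the paper's Top-5 ANPs and a 12-word tweet
attn = ANPAttention()
r_rel = attn(torch.randn(5, 200), torch.randn(12, 200), torch.rand(5))
print(r_rel.shape)  # torch.Size([200])
```

The final weighted average is the ANP-P.F. strategy that the ablation (rows 6, 7 of Table 4) compares against MaxPool and AvgPool over the N views.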
[
"Aspect-based Sentiment Analysis (ABSA) aims to identify the aspect terms, their corresponding sentiment polarities, and the opinion terms.",
"There exist seven subtasks in ABSA.",
"Most studies only focus on the subsets of these subtasks, which leads to various complicated ABSA models while hard to solve these subtasks in a unified framework.",
"In this paper, we redefine every subtask target as a sequence mixed by pointer indexes and sentiment class indexes, which converts all ABSA subtasks into a unified generative formulation.",
"Based on the unified formulation, we exploit the pre-training sequence-to-sequence model BART to solve all ABSA subtasks in an end-to-end framework.",
"Extensive experiments on four ABSA datasets for seven subtasks demonstrate that our framework achieves substantial performance gain and provides a real unified end-to-end solution for the whole ABSA subtasks, which could benefit multiple tasks 1 .",
"Aspect-based Sentiment Analysis (ABSA) is the fine-grained Sentiment Analysis (SA) task, which aims to identify the aspect term ( a ), its corresponding sentiment polarity ( s ), and the opinion term ( o ).",
"For example, in the sentence The drinks are always well made and wine selection is fairly priced , the aspect terms are drinks and wine selection , and their sentiment polarities are both positive, and the opinion terms are well made and fairly priced .",
"Based on the combination of the a , s , o , there exist seven subtasks in ABSA.",
"We summarize these subtasks in Figure",
"1. Specifically, their definitions are as follows: Equal contribution.",
"Aspect Term Extraction( AE ): Extracting all the aspect terms from a sentence.",
"Opinion Term Extraction ( OE ): Extracting all the opinion terms from a sentence.",
"Aspect-level Sentiment Classification ( ALSC ): Predicting the sentiment polarities for every given aspect terms in a sentence.",
"Aspect-oriented Opinion Extraction ( AOE ): Extracting the paired opinion terms for every given aspect terms in a sentence.",
"Aspect Term Extraction and Sentiment Classification ( AESC ): Extracting the aspect terms as well as the corresponding sentiment polarities simultaneously.",
"Pair Extraction ( Pair ): Extracting the aspect terms as well as the corresponding opinion terms simultaneously.",
"Triplet Extraction ( Triplet ): Extracting all aspects terms with their corresponding opinion terms and sentiment polarity simultaneously.",
"Although these ABSA subtasks are strongly related, most of the existing work only focus 1 3 subtasks individually.",
"The following divergences make it difficult to solve all subtasks in a unified framework.",
"1. Input: Some subtasks ( AE , OE , AESC , Pair and Triplet ) only take the text sentence as input, while the remained subtasks ( ALSC and AOE ) take the text and a given aspect term as input.",
"2. Output: Some tasks ( AE , OE , ALSC , AOE ) only output a certain type from a , s or o , while the remained tasks ( AESC , Pair and Triplet ) return compound output as the combination of a , s and o .",
"3. Task Type: There are two kinds of tasks: extraction task (extracting aspect and opinion) and classification task (predicting sentiment).",
"Because of the above divergences, a myriad of previous works only focus on the subset of these subtasks.",
"However, the importance of solving the whole ABSA subtasks in a unified framework remains significant.",
"Recently, several works make attempts on this track.",
"Some methods(Peng et al., 2020; Mao et al., 2021) apply the pipeline model to output the a , s , o from the inside sub-models separately.",
"However, the pipeline process is not end-to-end.",
"Another line follows the sequence tagging method by extending the tagging schema (Xu et al., 2020).",
"However, the compositionality of candidate labels hinders the performance.",
"In conclusion, the existing methods can hardly solve all the subtasks by a unified framework without relying on the sub-models or changing the model structure to adapt to all ABSA subtasks.",
"Motivated by the above observations, we propose a unified generative framework to address all the ABSA subtasks.",
"We first formulate all these subtasks as a generative task, which could handle the obstacles on the input, output, and task type sides and adapt to all the subtasks without any model structure changes.",
"Specifically, we model the extraction and classification tasks as the pointer indexes and class indexes generation, respectively.",
"Based on the unified task formulation, we use the sequence-to-sequence pre-trained model BART (Lewis et al., 2020) as our backbone to generate the target sequence in an end-to-end process.",
"To validate the effectiveness of our method, we conduct extensive experiments on public datasets.",
"The comparison results demonstrate that our proposed framework outperforms most state-of-the-art (SOTA) models in every subtask.",
"In summary, our main contributions are as follows: We formulate both the extraction task and classification task of ABSA into a unified index generation problem.",
"Unlike previous unified models, our method needs not to design specific decoders for different output types.",
"With our re-formulation, all ABSA subtasks can be solved in sequence-to-sequence framework, which is easy-to-implement and can be built on the pre-trained models, such as BART.",
"We conduct extensive experiments on four public datasets, and each dataset contains a subset of all ABSA subtasks.",
"To the best of our knowledge, it is the first work to evaluate a model on all ABSA tasks.",
"The experimental results show that our proposed framework significantly outperforms recent SOTA methods.",
"In this section, we first review the existing studies on single output subtasks, and then turn to studies focusing on the compound output subtasks.",
"Some researches mainly focus on the single output subtasks.",
"The AE , OE , ALSC and AOE subtasks only output one certain type from a , s or o .",
"AE Most studies treat AE subtask as a sequence tagging problem (Li and Lam, 2017; Xu et al., 2018; Li et al., 2018b).",
"Recent works explore sequence-to-sequence learning on AE subtask, which obtain promissing results especially with the pre-training language models (Ma et al., 2019; Li et al., 2020).",
"OE Most studies treat OE subtask as an auxiliary task (Wang et al., 2016a, 2017; Wang and Pan, 2018; Chen and Qian, 2020; He et al., 2019).",
"Most works can only extract the unpaired aspect and opinion terms 2 .",
"In this case, opinion terms are independent of aspect terms.",
"ALSC Tang et al. (2016a) use the long short term memory (LSTM) network to enhance the interactions between aspects and context words.",
"Wang et al. (2016b); Liu and Zhang (2017); Ma et al. (2017); Tay et al. (2018) incorporate the attention mechanism into the LSTM-based neural network models to model relations of aspects and their contextual words.",
"Other model structures such as convolutional neural network (CNN) (Li et al., 2018a; Xue and Li, 2018), gated neural network (Zhang et al., 2016; Xue and Li, 2018), memory neural 2 It is also referred to as the AE-OE co-Extraction.",
"AOE This subtask is first introduced by Fan et al. (2019) and they propose the datasets for this subtask.",
"Most studies apply sequence tagging method for this subtask (Wu et al., 2020; Pouran Ben Veyseh et al., 2020).",
"Some researchers pay more attention and efforts to the subtasks with compound output.",
"We review them as follows: AESC .",
"One line follows pipeline method to solve this problem.",
"Other works utilize unified tagging schema (Mitchell et al., 2013; Zhang et al., 2015; Li et al., 2019) or multi-task learning (He et al., 2019; Chen and Qian, 2020) to avoid the error-propagation problem (Ma et al., 2018).",
"Span-based AESC works are also proposed recently (Hu et al., 2019), which can tackle the sentiment inconsistency problem in the unified tagging schema.",
"Pairs Zhao et al. (2020) propose to extract all ( a , o ) pair-wise relations from scratch.",
"They propose a multi-task learning framework based on the span-based extraction method to handle this subtask.",
"Triplet This subtask is proposed by Peng et al. (2020) and gains increasing interests recently.",
"Xu et al. (2020) design the position-aware tagging schema and apply model based on CRF (Lafferty et al., 2001) and Semi-Markov CRF (Sarawagi and Cohen, 2004).",
"However, the time complexity limits the model to detect the aspect term with long-distance opinion terms.",
"Mao et al. (2021) formulate Triplet as a two-step MRC problem, which applies the pipeline method.",
"The sequence-to-sequence framework has been long studied in the NLP field to tackle various tasks (Sutskever et al., 2014; Cho et al., 2014; Vinyals et al., 2015; Luong et al., 2015).",
"Inspired by the success of PTMs (pre-trained models) (Qiu et al., 2020; Peters et al., 2018; Devlin et al., 2019; Brown et al., 2020), Song et al. (2019); Raffel et al. (2020); Lewis et al. (2020) try to pre-train sequence-to-sequence models.",
"Among them, we use the BART (Lewis et al., 2020) as our backbone, while the other sequence-to-sequence pre-training models can also be applied in our architecture to use the pointer mechanism (Vinyals et al., 2015), such as MASS (Song et al., 2019).",
"BART is a strong sequence-to-sequence pretrained model for Natural Language Generation (NLG).",
"BART is a denoising autoencoder composed of several transformer (Vaswani et al., 2017) encoder and decoder layers.",
"It is worth noting that the BART-Base model contains a 6-layer encoder and 6-layer decoder, which makes it similar number of parameters 3 with the BERT-Base model.",
"BART is pretrained on denoising tasks where the input sentence is noised by some methods, such as masking and permutation.",
"The encoder takes the noised sentence as input, and the decoder will restore the original sentence in an autoregressive manner.",
"Although there are two types of tasks among the seven ABSA subtasks, they can be formulated under a generative framework.",
"In this part, we first introduce our sequential representation for each ABSA subtask.",
"Then we detail our method, which utilizes BART to generate these sequential representations.",
"As depicted in Figure 1, there are two types of tasks, namely the extraction and classification, whose target can be represented as a sequence of pointer indexes and class indexes, respectively.",
"Therefore, we can formulate these two types of tasks in a unified generative framework.",
"We use a , s , o , to represent the aspect term, sentiment polarity,and opinion term, respectively.",
"Moreover, we use the superscript s and e to denote the start index and end index of a term.",
"For example, o s , a e represent the start index of an opinion term o and the end index of an aspect term a .",
"We use the s p to denote the index of sentiment polarity class.",
"The target sequence for each subtask is as follows: AE : Y = [ a s 1 , a e 1 , ..., a si , a ei , ... ] , OE : Y = [ o s 1 , o e 1 , ..., o si , o ei , ... ] , AESC : Y = [ a s 1 , a e 1 , s p 1 , ..., a si , a ei , s pi , ... ] , Pair : Y = [ a s 1 , a e 1 , o s 1 , o e 1 , ..., a si , a ei , o si , o ei ,... ] , Triplet : Y = [ a s 1 , a e 1 , o s 1 , o e 1 , s p 1 , ..., a si , a ei , o si , o ei , s pi , ... ] , The above subtasks only rely on the input sentence, while for the ALSC and AOE subtasks, they also depend on a specific aspect term a .",
"Instead of putting the aspect term on the input side, we put 3 Because of the cross-attention between encoder and decoder, the number of parameters of BART is about 10% larger than its counterpart of BERT (Lewis et al., 2020).",
"are as follows:",
"ALSC : Y = [ a s , a e , s p ] , AOE : Y = [ a s , a e , o s 1 , o e 1 , ..., o si , o ei , ... ] , where the underlined tokens are given during inference.",
"Detailed target sequence examples for each subtask are presented in Figure",
"3. 3.2 Our Model As our discussion in the last section, all subtasks can be formulated as taking the X = [ x 1 , ..., x n ] as input and outputting a target sequence Y = [ y 1 , ..., y m ] , where y 0 is the start-of-the-sentence token.",
"Therefore, different ABSA subtasks can be formulated as: P ( Y | X ) = m (cid:89) t =1 P ( y t | X, Y <t ) .",
"To get the index probability distribution P t = P ( y t | X, Y <t ) for each step, we use a model composed of two components: (1) Encoder ; (2) Decoder .",
"Encoder The encoder part is to encode X into vectors H e .",
"We use the BART model, therefore, the start of sentence ( < s > ) and the end of sentence ( < /s > ) tokens will be added to the start and end of X , respectively.",
"We ignore the < s > token in our equations for simplicity.",
"The encoder part is as follows: H e = BARTEncoder([ x 1 , ..., x n ]) , (2) where H e R n d , and d is the hidden dimension.",
"Decoder The decoder part takes the encoder outputs H e and previous decoder outputs Y <t as inputs to get P t .",
"However, the Y <t is an index sequence.",
"Therefore, for each y t in Y <t , we first need to use the following Index2Token module to conduct a Dataset 14res 14lap 15res 16res Subtasks # s # a # o # p # s # a # o # p # s # a # o # p # s # a # o # p D 17 train 3044 3699 3484 3048 2373 2504 1315 1199 1210 --AE , OE , ALSC , AESC test 800 1134 1008 800 654 674 685 542 510 --D 19 train 1627 2643 -1158 1634 -754 1076 -1079 1512 --AOE test 500 865 -343 482 -325 436 -329 457 --D 20 a train 1300 -2145 920 -1265 593 -923 842 -1289 AE , OE , ALSC , AOE , AESC , Pair , Triplet dev 323 -524 228 -337 148 -238 210 -316 test 496 -862 339 -490 318 -455 320 -465 D 20 b train 1266 -2338 906 -1460 605 -1013 857 -1394 AE , OE , ALSC , AOE , AESC , Pair , Triplet dev 310 -577 219 -346 148 -249 210 -339 test 492 -994 328 -543 148 -485 326 -514 Table 1: The statistics of four datasets, where the # s , # a , # o , # p denote the numbers of sentences, aspect terms, opinion terms, and the <a , o> pairs, respectively.",
"y t = (cid:40) X y t , if y t is a pointer index , C y t n , if y t is a class index , (3) where C = [ c 1 , ..., c l ] is the class token list 4 .",
"After that, we use the BART decoder to get the last hidden state h dt = BARTDecoder( H e ; Y <t ) , (4) where h dt R d .",
"With h dt , we predict the token probability distribution P t as follows: E e = BARTTokenEmbed( X ) , (5) H e = MLP( H e ) , (6) H e = H e + (1 ) E e , (7) C d = BARTTokenEmbed( C ) , (8) P t = Softmax([ H e ; C d ] h dt ) , (9) where E e , H e , H e , H e R n d ; C d R l d ; and P t R ( n + l ) is the final distribution on all indexes.",
"During the training phase, we use the teacher forcing to train our model and the negative log-likelihood to optimize the model.",
"Moreover, during the inference, we use the beam search to get the target sequence Y in an autoregressive manner.",
"After that, we need to use the decoding algorithm to convert this sequence into the term spans and sentiment polarity.",
"We use the Triplet task as an example and present the decoding algorithm in Algorithm 1, the decoding algorithm for other tasks are much depicted in the Supplementary Material.",
"We evaluate our method on four ABSA datasets.",
"All of them are originated from the Semeval Challenges (Pontiki et al., 2014a,b,c), where only the aspect terms and their sentiment polarities are labeled.",
"The first dataset( D 17 5 ) is annotated by Wang et al. (2017), where the unpaire opinion terms are labeled.",
"The second dataset( D 19 ) is annotated by Fan et al. (2019), where they pair opinion terms with 4 In our implement, y t [1 , n + l ] .",
"corresponding aspects.",
"The third dataset( D 20 a ) is from Peng et al. (2020).",
"They refine the data in <a , o , s> triplet form.",
"The fourth dataset( D 20 b ) from Xu et al. (2020) is the revised variant of Peng et al. (2020), where the missing triplets with overlapping opinions are corrected.",
"We present the statistics for these four datasets in Table",
"1. 4.2 Baselines To have a fair comparison, we summarize top-performing baselines of all ABSA subtasks.",
"Given different ABSA subtasks, datasets, and experimental setups, existing baselines can be separated into three groups roughly as shown in Table",
"2. The baselines in the first group are conducted on D 17 dataset, covering the AE , OE , ALSC , and AESC subtasks.",
"Span-based method SPAN-BERT (Hu et al., 2019) and sequence tagging method, IMN-BERT (He et al., 2019) and RACL-BERT (Chen and Qian, 2020), are selected.",
"Specifically, the IMN-BERT model is reproduced by Chen and Qian (2020).",
"All these baselines are implemented on BERT-Large.",
"The baselines of the second group are conducted on D 19 dataset, mainly focusing on AOE subtask.",
"Interestingly, we find that sequence tagging method is the main solution for this subtask (Fan et al., 2019; Wu et al., 2020; Pouran Ben Veyseh et al., 2020).",
"The baselines of the third group are mainly conducted on D 20 a and D 20 b datasets, which could cover almost all the ABSA subtasks except for one certain subtask depending on the baseline structures.",
"For the following baselines: RINANTE (Dai and Song, 2019), CMLA (Wang et al., 2017), Li-unified (Li et al., 2019), the suffix + in Table 2 denotes the corresponding model variant modified by Peng et al. (2020) for being capable of AESC , Pair and Triplet .",
"Following previous studies, we use different metrics according to different subtasks and datasets.",
"Specifically, for the single output subtasks AE , OE , and AOE , the prediction span would be considered as correct only if it exactly matches the start and the end boundaries.",
"For the ALSC subtask, we require the generated sentiment polarity of the given aspect should be the same as the ground truth.",
"As for compound output subtasks, AESC , Pair and Triplet , a prediction result is correct only when all the span boundaries and the generated sentiment polarity are accurately identified.",
"We report the precision (P), recall (R), and F1 scores for all experiments 6 .",
"On D 17 dataset (Wang et al., 2017), we compare our method for AE , OE , ALSC , and AESC .",
"The comparison results are shown in Table",
"3. Most of our results achieve better or comparable results to 6 Due to the limited space, we would present detailed experiments for each dataset in the Supplementary Material.",
"baselines.",
"However, these baselines yield competitive results based on the BERT-Large pre-trained models.",
"While our results are achieved on the BART-Base model with almost half parameters.",
"This shows that our framework is more suitable for these ABSA subtasks.",
"On D 19 dataset (Fan et al., 2019), we compare our method for AOE .",
"The comparison results are shown in Table",
"4. We can observe that our method achieves significant P/R/F1 improvements on 14res, 15res, and 16res.",
"Additionally, we notice that our F1 score on 14lap is close to the previous SOTA result.",
"This is probably caused by the dataset domain difference as the 14lap is the laptop comments while the others are restaurant comments.",
"On D 20 a dataset (Peng et al., 2020), we compare our method for AESC , Pair , and Triplet .",
"The comparison results are shown in Table",
"5. We can observe that our proposed method is able to outperform other baselines on all datasets.",
"Specifically, we achieve the better results for Triplet , which demonstrates the effectiveness of our method on capturing interactions among aspect terms, opinion terms, and sentiment polarities.",
"We also observe that the Span-based methods show superior performance to sequence tagging methods.",
"This may be caused by the higher compositionality of candidate labels in sequence tagging methods (Hu et al., 2019).",
"As the previous SOTA method, the Dual-MRC shows competitive performance by utilizing the span-based extraction method and the MRC mechanism.",
"However, their inference process is not an end-to-end process.",
"On D 20 b dataset (Xu et al., 2020), we compare our method for Triplet .",
"The comparison results can be found in Table",
"6. Our method achieves the best results with nearly 7 F1 points improvements on 14res, 15res, and 16res.",
"Our method achieves nearly 13, 9, 7, 12 points improvements on each dataset for the recall scores compared with other baselines.",
"This also explains the drop performance of the precision score.",
"Since D 20 b is refined from D 20 a , we specifically compare the Triplet results of the corresponding dataset in D 20 a and D 20 b .",
"Interestingly, we discover that all baselines have a much bigger performance change on 15res.",
"We conjecture the distribution differences may be the cause reason.",
"In conclusion, all the experiment results confirm that our proposed method, which unifies the training and the inference to an end-to-end generative framework, provides a new SOTA solution for the whole ABSA task.",
"To better understand our proposed framework, we conduct analysis experiments on the D 20 b dataset (Xu et al., 2020).",
"To validate whether our proposed framework could adapt to the generative ABSA task, we metric the invalid predictions for the Triplet .",
"Specifically, since the Triplet requires the prediction format like [ a s , a e , o s , o e , s p ] , it is mandatory that one valid triplet prediction should be in length 5, noted as 5-len, and obviously all end index should be larger than the corresponding start index, noted as ordered prediction.",
"We calculate number of non 5 len total prediction , referred to as the Invalid size, and the number of non ordered prediction total 5 len prediction , referred to as the Invalid order.",
"The Invalid token means the a s is not the start of a token, instead, it is the index of an inside subword.",
"From Table 7, we can observe that BART could learn this task form easily as the low rate for all the three metrics, which demonstrate that the generative framework for ABSA is not only a theoretically unified task form but also a realizable framework in practical.",
"We remove these invalid predictions in our implementation of experiments.",
"As shown in Table 4, we give some analysis on the impact of the beam size, as we are a generation method.",
"However, the beam size seems to have little impact on the F1 scores.",
"This paper summarizes the seven ABSA subtasks and previous studies, which shows that there exist divergences on all the input, output, and task type sides.",
"Previous studies have limitations on handling all these divergences in a unified framework.",
"We propose to convert all the ABSA subtasks to a unified generative task.",
"We implement the BART to generate the target sequence in an end-to-end process based on the unified task formulation.",
"We conduct massive experiments on public datasets for seven ABSA subtasks and achieve significant improvements on most datasets.",
"The experimental results demonstrate the effectiveness of our method.",
"Our work leads to several promising directions, such as sequence-to-sequence framework on other tasks, and data augmentation.",
"We would like to thank the anonymous reviewers for their insightful comments.",
"The discussion with colleagues in AWS Shanghai AI Lab was quite fruitful.",
"We also thank the developers of fastNLP 7 and fitlog 8 .",
"This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106700) and National Natural Science Foundation of China (No. 62022027).",
"For the consideration of ethical concerns, we would make detailed description as follows:",
"(1) All the experiments are conducted on existing datasets, which are derived from public scientific papers.",
"(2) We describe the characteristics of the datasets in a specific section.",
"Our analysis is consistent with the results.",
"(3) Our work does not contain identity characteristics.",
"It does not harm anyone.",
"(4) Our experiments do not need a lot of computer resources compared to pre-trained models.",
"(5) We will open source all our code."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"method",
"objective",
"objective",
"method",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
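The unified formulation above turns every ABSA subtask into building a sequence of pointer indexes plus class indexes. The sketch below is an illustrative reconstruction for the Triplet layout only; the function name build_target, the 1-based pointer convention, and the polarity class order are our assumptions, not details fixed by the paper.

```python
def build_target(n, triplets, classes=("positive", "negative", "neutral")):
    """n: number of input tokens (1-based pointers); triplets: (a_s, a_e, o_s, o_e, polarity)."""
    # class indexes follow the pointer range [1, n], so C[y_t - n] recovers
    # the class token as in Eq. 3 (with a 1-based class list C)
    class_index = {c: n + i + 1 for i, c in enumerate(classes)}
    target = []
    for a_s, a_e, o_s, o_e, pol in triplets:
        # Triplet layout: [a^s, a^e, o^s, o^e, s^p, ...]
        target += [a_s, a_e, o_s, o_e, class_index[pol]]
    return target

# "The drinks are always well made": aspect "drinks" = (2, 2),
# opinion "well made" = (5, 6), positive polarity, n = 6 tokens:
print(build_target(6, [(2, 2, 5, 6, "positive")]))  # [2, 2, 5, 6, 7]
```

The other subtasks follow by dropping or reordering fields of the same flat index sequence, which is why no decoder changes are needed across subtasks.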
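The decoder's index distribution (Eqs. 5-9) can likewise be sketched compactly. This is a toy rendering under stated shapes, not the released model: alpha in Eq. 7 is a blending hyperparameter whose value is not given in this excerpt, and the MLP is any per-position map over the encoder states.

```python
import torch
import torch.nn as nn

def index_distribution(H_e, E_e, C_d, h_t, mlp, alpha=0.5):
    """Sketch of Eqs. 5-9: one distribution over n pointer indexes plus l class indexes."""
    H_bar = mlp(H_e)                                 # Eq. 6
    H_hat = alpha * H_bar + (1 - alpha) * E_e        # Eq. 7 (alpha value is assumed)
    logits = torch.cat([H_hat, C_d], dim=0) @ h_t    # [(n + l), d] x (d,) -> (n + l,)
    return torch.softmax(logits, dim=-1)             # P_t over all indexes (Eq. 9)

# usage sketch with toy sizes: n = 5 tokens, l = 3 class tokens, d = 8
d, n, l = 8, 5, 3
mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
P_t = index_distribution(torch.randn(n, d), torch.randn(n, d),
                         torch.randn(l, d), torch.randn(d), mlp)
print(P_t.shape)  # torch.Size([8]) == n + l
```

Because the logits are dot products against encoder-side vectors rather than a fixed vocabulary, the same head serves as both a pointer network over the input and a classifier over the polarity tokens.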
[
"Operational risk management is one of the biggest challenges nowadays faced by financial institutions.",
"There are several major challenges of building a text classification system for automatic operational risk prediction, including imbalanced labeled/unlabeled data and lacking interpretability.",
"To tackle these challenges, we present a semi-supervised text classification framework that integrates multi-head attention mechanism with Semi-supervised variational inference for Operational Risk Classification (SemiORC).",
"We empirically evaluate the framework on a real-world dataset.",
"The results demonstrate that our method can better utilize unlabeled data and learn visually interpretable document representations.",
"SemiORC also outperforms other baseline methods on operational risk classification.",
"In the decade since the global financial crisis, banks and regulators have become increasingly alert to operational risks (OR).",
"However, the banks still struggle to deal with operational risks effectively (Hoffman, 2002).",
"It is reported that major banks global wide have suffered nearly $210 billion in operational risk losses since 2011 1 .",
"Operational risks refer to the risks of loss due to errors, breaches, interruptions or damages caused by people, internal processes, systems or external events (Coleman, 2010).",
"One of the daily jobs of risk officers is screening potential operational risks from a massive amount of online news outlets.",
"Therefore, there is an urgent need for financial organizations to use artificial intelligence methods for OR prediction.",
"While this task can be easily formulated as a classic document classification problem, there are 1 https://www.bain.com/insights/ how-banks-can-manage-operational-risk/ at least two challenges in designing such an intelligent OR prediction system.",
"First, acquiring labels from risk officers is time-costly, and there is no standard labeled dataset for this task.",
"Second, providing explanations is critical for OR prediction as risk officers cannot solely rely on prediction outcomes for subsequent decision making.",
"Therefore, these practical issues call for an interpretable semi-supervised text classification framework for OR prediction.",
"However, little prior literature has specifically studied these issues in one framework.",
"To tackle the above-mentioned practical challenges, we propose a semi-supervised text classification model based on the semi-supervised variational autoencoder (SemiVAE) (Kingma et al., 2014) and multi-head attention mechanism (Vaswani et al., 2017) for OR prediction task.",
"SemiVAE allows effective learning of latent representation from both labeled and unlabeled data, and multi-head attention mechanism produces the direct visualization of informative words associated with multi-label predictive outcomes.",
"Learning the model parameters is effective and scalable under the variational inference method.",
"This paper contributes to the burgeoning body of research on using NLP techniques in key financial applications.",
"For example, the prior study leverages the textual features in firm annual reports to predict a firm's stock price volatility using firm annual reports (Kogan et al., 2009) and earnings announcement transcripts (Qin and Yang, 2019).",
"Other researches make use of news articles and social media data to predict financial markets variables, such as stock return, firm performance, default prediction and market sentiment (Tetlock, 2007; Schumaker and Chen, 2009; Ding et al., 2015; Luo et al., 2018).",
"It is worth emphasizing that the pre-requisites of using NLP in key financial applications are effective and transparent.",
"In many cases, it requires extensive domain expertise to annotate the variable of interests.",
"Moreover, z i <latexit sha1_base64=\"UY6E2icMCokwsGMbJyUh3kjb+CQ=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LLoxmUF+4CmlMn0ph06mYSZiVBDf8ONC0Xc+jPu/BsnbRbaemDgcM693DMnSATXxnW/ndLa+sbmVnm7srO7t39QPTxq6zhVDFssFrHqBlSj4BJbhhuB3UQhjQKBnWBym/udR1Sax/LBTBPsR3QkecgZNVby/YiacRBmT7MBH1Rrbt2dg6wSryA1KNAcVL/8YczSCKVhgmrd89zE9DOqDGcCZxU/1ZhQNqEj7FkqaYS6n80zz8iZVYYkjJV90pC5+nsjo5HW0yiwk3lGvezl4n9eLzXhdT/jMkkNSrY4FKaCmJjkBZAhV8iMmFpCmeI2K2FjqigztqaKLcFb/vIqaV/UPbfu3V/WGjdFHWU4gVM4Bw+uoAF30IQWMEjgGV7hzUmdF+fd+ViMlpxi5xj+wPn8AYJOkfo=</latexit> <latexit sha1_base64=\"UY6E2icMCokwsGMbJyUh3kjb+CQ=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LLoxmUF+4CmlMn0ph06mYSZiVBDf8ONC0Xc+jPu/BsnbRbaemDgcM693DMnSATXxnW/ndLa+sbmVnm7srO7t39QPTxq6zhVDFssFrHqBlSj4BJbhhuB3UQhjQKBnWBym/udR1Sax/LBTBPsR3QkecgZNVby/YiacRBmT7MBH1Rrbt2dg6wSryA1KNAcVL/8YczSCKVhgmrd89zE9DOqDGcCZxU/1ZhQNqEj7FkqaYS6n80zz8iZVYYkjJV90pC5+nsjo5HW0yiwk3lGvezl4n9eLzXhdT/jMkkNSrY4FKaCmJjkBZAhV8iMmFpCmeI2K2FjqigztqaKLcFb/vIqaV/UPbfu3V/WGjdFHWU4gVM4Bw+uoAF30IQWMEjgGV7hzUmdF+fd+ViMlpxi5xj+wPn8AYJOkfo=</latexit> <latexit sha1_base64=\"UY6E2icMCokwsGMbJyUh3kjb+CQ=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LLoxmUF+4CmlMn0ph06mYSZiVBDf8ONC0Xc+jPu/BsnbRbaemDgcM693DMnSATXxnW/ndLa+sbmVnm7srO7t39QPTxq6zhVDFssFrHqBlSj4BJbhhuB3UQhjQKBnWBym/udR1Sax/LBTBPsR3QkecgZNVby/YiacRBmT7MBH1Rrbt2dg6wSryA1KNAcVL/8YczSCKVhgmrd89zE9DOqDGcCZxU/1ZhQNqEj7FkqaYS6n80zz8iZVYYkjJV90pC5+nsjo5HW0yiwk3lGvezl4n9eLzXhdT/jMkkNSrY4FKaCmJjkBZAhV8iMmFpCmeI2K2FjqigztqaKLcFb/vIqaV/UPbfu3V/WGjdFHWU4gVM4Bw+uoAF30IQWMEjgGV7hzUmdF+fd+ViMlpxi5xj+wPn8AYJOkfo=</latexit> <latexit sha1_base64=\"UY6E2icMCokwsGMbJyUh3kjb+CQ=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LLoxmUF+4CmlMn0ph06mYSZiVBDf8ONC0Xc+jPu/BsnbRbaemDgcM693DMnSATXxnW/ndLa+sbmVnm7srO7t39QPTxq6zhVDFssFrHqBlSj4BJbhhuB3UQhjQKBnWBym/udR1Sax/LBTBPsR3QkecgZNVby/YiacRBmT7MBH1Rrbt2dg6wSryA1KNAcVL/8YczSCKVhgmrd89zE9DOqDGcCZxU/1ZhQNqEj7FkqaYS6n80zz8iZVYYkjJV90pC5+nsjo5HW0yiwk3lGvezl4n9eLzXhdT/jMkkNSrY4FKaCmJjkBZAhV8iMmFpCmeI2K2FjqigztqaKLcFb/vIqaV/UPbfu3V/WGjdFHWU4gVM4Bw+uoAF30IQWMEjgGV7hzUmdF+fd+ViMlpxi5xj+wPn8AYJOkfo=</latexit> D i <latexit sha1_base64=\"5w/6IEWDIi4/qwt55F3qiuPWP2k=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LKoC5cV7AOaUCbTSTt0MgkzN0IJ/Q03LhRx68+482+ctllo64GBwzn3cs+cMJXCoOt+O6W19Y3NrfJ2ZWd3b/+genjUNkmmGW+xRCa6G1LDpVC8hQIl76aa0ziUvBOOb2d+54lrIxL1iJOUBzEdKhEJRtFKvh9THIVRfjfti3615tbdOcgq8QpSgwLNfvXLHyQsi7lCJqkxPc9NMcipRsEkn1b8zPCUsjEd8p6lisbcBPk885ScWWVAokTbp5DM1d8bOY2NmcShnZxlNMveTPzP62UYXQe5UGmGXLHFoSiTBBMyK4AMhOYM5cQSyrSwWQkbUU0Z2poqtgRv+curpH1R99y693BZa9wUdZThBE7hHDy4ggbcQxNawCCFZ3iFNydzXpx352MxWnKKnWP4A+fzBy/UkcQ=</latexit> <latexit sha1_base64=\"5w/6IEWDIi4/qwt55F3qiuPWP2k=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LKoC5cV7AOaUCbTSTt0MgkzN0IJ/Q03LhRx68+482+ctllo64GBwzn3cs+cMJXCoOt+O6W19Y3NrfJ2ZWd3b/+genjUNkmmGW+xRCa6G1LDpVC8hQIl76aa0ziUvBOOb2d+54lrIxL1iJOUBzEdKhEJRtFKvh9THIVRfjfti3615tbdOcgq8QpSgwLNfvXLHyQsi7lCJqkxPc9NMcipRsEkn1b8zPCUsjEd8p6lisbcBPk885ScWWVAokTbp5DM1d8bOY2NmcShnZxlNMveTPzP62UYXQe5UGmGXLHFoSiTBBMyK4AMhOYM5cQSyrSwWQkbUU0Z2poqtgRv+curpH1R99y693BZa9wUdZThBE7hHDy4ggbcQxNawCCFZ3iFNydzXpx352MxWnKKnWP4A+fzBy/UkcQ=</latexit> <latexit 
sha1_base64=\"5w/6IEWDIi4/qwt55F3qiuPWP2k=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LKoC5cV7AOaUCbTSTt0MgkzN0IJ/Q03LhRx68+482+ctllo64GBwzn3cs+cMJXCoOt+O6W19Y3NrfJ2ZWd3b/+genjUNkmmGW+xRCa6G1LDpVC8hQIl76aa0ziUvBOOb2d+54lrIxL1iJOUBzEdKhEJRtFKvh9THIVRfjfti3615tbdOcgq8QpSgwLNfvXLHyQsi7lCJqkxPc9NMcipRsEkn1b8zPCUsjEd8p6lisbcBPk885ScWWVAokTbp5DM1d8bOY2NmcShnZxlNMveTPzP62UYXQe5UGmGXLHFoSiTBBMyK4AMhOYM5cQSyrSwWQkbUU0Z2poqtgRv+curpH1R99y693BZa9wUdZThBE7hHDy4ggbcQxNawCCFZ3iFNydzXpx352MxWnKKnWP4A+fzBy/UkcQ=</latexit> <latexit sha1_base64=\"5w/6IEWDIi4/qwt55F3qiuPWP2k=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LKoC5cV7AOaUCbTSTt0MgkzN0IJ/Q03LhRx68+482+ctllo64GBwzn3cs+cMJXCoOt+O6W19Y3NrfJ2ZWd3b/+genjUNkmmGW+xRCa6G1LDpVC8hQIl76aa0ziUvBOOb2d+54lrIxL1iJOUBzEdKhEJRtFKvh9THIVRfjfti3615tbdOcgq8QpSgwLNfvXLHyQsi7lCJqkxPc9NMcipRsEkn1b8zPCUsjEd8p6lisbcBPk885ScWWVAokTbp5DM1d8bOY2NmcShnZxlNMveTPzP62UYXQe5UGmGXLHFoSiTBBMyK4AMhOYM5cQSyrSwWQkbUU0Z2poqtgRv+curpH1R99y693BZa9wUdZThBE7hHDy4ggbcQxNawCCFZ3iFNydzXpx352MxWnKKnWP4A+fzBy/UkcQ=</latexit> Bi-LSTM a i <latexit sha1_base64=\"bNcjTJ8RJ8odO/aEL4ZsKs/zIq0=\">AAAB83icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gOaUDbbTbt0swm7E7GE/g0vHhTx6p/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeTaiEQ94CTlQUyHSkSCUbSS7yN/wjDK6bQv+tWaW3fnIKvEK0gNCjT71S9/kLAs5gqZpMb0PDfFIKcaBZN8WvEzw1PKxnTIe5YqGnMT5PObp+TMKgMSJdqWQjJXf0/kNDZmEoe2M6Y4MsveTPzP62UYXQe5UGmGXLHFoiiTBBMyC4AMhOYM5cQSyrSwtxI2opoytDFVbAje8surpH1R99y6d39Za9wUcZThBE7hHDy4ggbcQRNawCCFZ3iFNydzXpx352PRWnKKmWP4A+fzB4Xpkfw=</latexit> <latexit sha1_base64=\"bNcjTJ8RJ8odO/aEL4ZsKs/zIq0=\">AAAB83icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gOaUDbbTbt0swm7E7GE/g0vHhTx6p/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeTaiEQ94CTlQUyHSkSCUbSS7yN/wjDK6bQv+tWaW3fnIKvEK0gNCjT71S9/kLAs5gqZpMb0PDfFIKcaBZN8WvEzw1PKxnTIe5YqGnMT5PObp+TMKgMSJdqWQjJXf0/kNDZmEoe2M6Y4MsveTPzP62UYXQe5UGmGXLHFoiiTBBMyC4AMhOYM5cQSyrSwtxI2opoytDFVbAje8surpH1R99y6d39Za9wUcZThBE7hHDy4ggbcQRNawCCFZ3iFNydzXpx352PRWnKKmWP4A+fzB4Xpkfw=</latexit> <latexit sha1_base64=\"bNcjTJ8RJ8odO/aEL4ZsKs/zIq0=\">AAAB83icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gOaUDbbTbt0swm7E7GE/g0vHhTx6p/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeTaiEQ94CTlQUyHSkSCUbSS7yN/wjDK6bQv+tWaW3fnIKvEK0gNCjT71S9/kLAs5gqZpMb0PDfFIKcaBZN8WvEzw1PKxnTIe5YqGnMT5PObp+TMKgMSJdqWQjJXf0/kNDZmEoe2M6Y4MsveTPzP62UYXQe5UGmGXLHFoiiTBBMyC4AMhOYM5cQSyrSwtxI2opoytDFVbAje8surpH1R99y6d39Za9wUcZThBE7hHDy4ggbcQRNawCCFZ3iFNydzXpx352PRWnKKmWP4A+fzB4Xpkfw=</latexit> <latexit sha1_base64=\"bNcjTJ8RJ8odO/aEL4ZsKs/zIq0=\">AAAB83icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gOaUDbbTbt0swm7E7GE/g0vHhTx6p/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2z+oHh61TZJpxlsskYnuhtRwKRRvoUDJu6nmNA4l74Tj25nfeeTaiEQ94CTlQUyHSkSCUbSS7yN/wjDK6bQv+tWaW3fnIKvEK0gNCjT71S9/kLAs5gqZpMb0PDfFIKcaBZN8WvEzw1PKxnTIe5YqGnMT5PObp+TMKgMSJdqWQjJXf0/kNDZmEoe2M6Y4MsveTPzP62UYXQe5UGmGXLHFoiiTBBMyC4AMhOYM5cQSyrSwtxI2opoytDFVbAje8surpH1R99y6d39Za9wUcZThBE7hHDy4ggbcQRNawCCFZ3iFNydzXpx352PRWnKKmWP4A+fzB4Xpkfw=</latexit> X <latexit 
sha1_base64=\"8PPxFCMbrXOMLn0wIrNgh8zGV+g=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsyIUJcFEVxWsA/ojCWTZtrQJDMkGaUM/Q83LhRx67+482/MtLPQ1gOBwzn3ck9OmHCmjet+O6W19Y3NrfJ2ZWd3b/+genjU0XGqCG2TmMeqF2JNOZO0bZjhtJcoikXIaTecXOd+95EqzWJ5b6YJDQQeSRYxgo2VHnyBzTiMMl+nYlYZVGtu3Z0DrRKvIDUo0BpUv/xhTFJBpSEca9333MQEGVaGEU5nFT/VNMFkgke0b6nEguogm6eeoTOrDFEUK/ukQXP190aGhdZTEdrJPKVe9nLxP6+fmugqyJhMUkMlWRyKUo5MjPIK0JApSgyfWoKJYjYrImOsMDG2qLwEb/nLq6RzUffcund3WWveFHWU4QRO4Rw8aEATbqEFbSCg4Ble4c15cl6cd+djMVpyip1j+APn8weOt5KK</latexit> <latexit sha1_base64=\"8PPxFCMbrXOMLn0wIrNgh8zGV+g=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsyIUJcFEVxWsA/ojCWTZtrQJDMkGaUM/Q83LhRx67+482/MtLPQ1gOBwzn3ck9OmHCmjet+O6W19Y3NrfJ2ZWd3b/+genjU0XGqCG2TmMeqF2JNOZO0bZjhtJcoikXIaTecXOd+95EqzWJ5b6YJDQQeSRYxgo2VHnyBzTiMMl+nYlYZVGtu3Z0DrRKvIDUo0BpUv/xhTFJBpSEca9333MQEGVaGEU5nFT/VNMFkgke0b6nEguogm6eeoTOrDFEUK/ukQXP190aGhdZTEdrJPKVe9nLxP6+fmugqyJhMUkMlWRyKUo5MjPIK0JApSgyfWoKJYjYrImOsMDG2qLwEb/nLq6RzUffcund3WWveFHWU4QRO4Rw8aEATbqEFbSCg4Ble4c15cl6cd+djMVpyip1j+APn8weOt5KK</latexit> <latexit sha1_base64=\"8PPxFCMbrXOMLn0wIrNgh8zGV+g=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsyIUJcFEVxWsA/ojCWTZtrQJDMkGaUM/Q83LhRx67+482/MtLPQ1gOBwzn3ck9OmHCmjet+O6W19Y3NrfJ2ZWd3b/+genjU0XGqCG2TmMeqF2JNOZO0bZjhtJcoikXIaTecXOd+95EqzWJ5b6YJDQQeSRYxgo2VHnyBzTiMMl+nYlYZVGtu3Z0DrRKvIDUo0BpUv/xhTFJBpSEca9333MQEGVaGEU5nFT/VNMFkgke0b6nEguogm6eeoTOrDFEUK/ukQXP190aGhdZTEdrJPKVe9nLxP6+fmugqyJhMUkMlWRyKUo5MjPIK0JApSgyfWoKJYjYrImOsMDG2qLwEb/nLq6RzUffcund3WWveFHWU4QRO4Rw8aEATbqEFbSCg4Ble4c15cl6cd+djMVpyip1j+APn8weOt5KK</latexit> <latexit sha1_base64=\"8PPxFCMbrXOMLn0wIrNgh8zGV+g=\">AAAB9XicbVDLSgMxFL1TX7W+qi7dBIvgqsyIUJcFEVxWsA/ojCWTZtrQJDMkGaUM/Q83LhRx67+482/MtLPQ1gOBwzn3ck9OmHCmjet+O6W19Y3NrfJ2ZWd3b/+genjU0XGqCG2TmMeqF2JNOZO0bZjhtJcoikXIaTecXOd+95EqzWJ5b6YJDQQeSRYxgo2VHnyBzTiMMl+nYlYZVGtu3Z0DrRKvIDUo0BpUv/xhTFJBpSEca9333MQEGVaGEU5nFT/VNMFkgke0b6nEguogm6eeoTOrDFEUK/ukQXP190aGhdZTEdrJPKVe9nLxP6+fmugqyJhMUkMlWRyKUo5MjPIK0JApSgyfWoKJYjYrImOsMDG2qLwEb/nLq6RzUffcund3WWveFHWU4QRO4Rw8aEATbqEFbSCg4Ble4c15cl6cd+djMVpyip1j+APn8weOt5KK</latexit> L i n ea r Decoder Classifier unlabeled w i, 1 <latexit sha1_base64=\"yvhBgp8D1qpYQpVF/ZVPY2LV/F4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBovgQkoigi6LblxWsA9oQ5hMJ+3QySTMTCol5E/cuFDErX/izr9x0mahrQcGDufcyz1zgoQzpR3n26qsrW9sblW3azu7e/sH9uFRR8WpJLRNYh7LXoAV5UzQtmaa014iKY4CTrvB5K7wu1MqFYvFo54l1IvwSLCQEayN5Nv2IMJ6HITZU+5n7MLNfbvuNJw50CpxS1KHEi3f/hoMY5JGVGjCsVJ910m0l2GpGeE0rw1SRRNMJnhE+4YKHFHlZfPkOTozyhCFsTRPaDRXf29kOFJqFgVmssiplr1C/M/rpzq88TImklRTQRaHwpQjHaOiBjRkkhLNZ4ZgIpnJisgYS0y0KatmSnCXv7xKOpcN12m4D1f15m1ZRxVO4BTOwYVraMI9tKANBKbwDK/wZmXWi/VufSxGK1a5cwx/YH3+AKSik6U=</latexit> <latexit sha1_base64=\"yvhBgp8D1qpYQpVF/ZVPY2LV/F4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBovgQkoigi6LblxWsA9oQ5hMJ+3QySTMTCol5E/cuFDErX/izr9x0mahrQcGDufcyz1zgoQzpR3n26qsrW9sblW3azu7e/sH9uFRR8WpJLRNYh7LXoAV5UzQtmaa014iKY4CTrvB5K7wu1MqFYvFo54l1IvwSLCQEayN5Nv2IMJ6HITZU+5n7MLNfbvuNJw50CpxS1KHEi3f/hoMY5JGVGjCsVJ910m0l2GpGeE0rw1SRRNMJnhE+4YKHFHlZfPkOTozyhCFsTRPaDRXf29kOFJqFgVmssiplr1C/M/rpzq88TImklRTQRaHwpQjHaOiBjRkkhLNZ4ZgIpnJisgYS0y0KatmSnCXv7xKOpcN12m4D1f15m1ZRxVO4BTOwYVraMI9tKANBKbwDK/wZmXWi/VufSxGK1a5cwx/YH3+AKSik6U=</latexit> <latexit 
sha1_base64=\"yvhBgp8D1qpYQpVF/ZVPY2LV/F4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBovgQkoigi6LblxWsA9oQ5hMJ+3QySTMTCol5E/cuFDErX/izr9x0mahrQcGDufcyz1zgoQzpR3n26qsrW9sblW3azu7e/sH9uFRR8WpJLRNYh7LXoAV5UzQtmaa014iKY4CTrvB5K7wu1MqFYvFo54l1IvwSLCQEayN5Nv2IMJ6HITZU+5n7MLNfbvuNJw50CpxS1KHEi3f/hoMY5JGVGjCsVJ910m0l2GpGeE0rw1SRRNMJnhE+4YKHFHlZfPkOTozyhCFsTRPaDRXf29kOFJqFgVmssiplr1C/M/rpzq88TImklRTQRaHwpQjHaOiBjRkkhLNZ4ZgIpnJisgYS0y0KatmSnCXv7xKOpcN12m4D1f15m1ZRxVO4BTOwYVraMI9tKANBKbwDK/wZmXWi/VufSxGK1a5cwx/YH3+AKSik6U=</latexit> <latexit sha1_base64=\"yvhBgp8D1qpYQpVF/ZVPY2LV/F4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBovgQkoigi6LblxWsA9oQ5hMJ+3QySTMTCol5E/cuFDErX/izr9x0mahrQcGDufcyz1zgoQzpR3n26qsrW9sblW3azu7e/sH9uFRR8WpJLRNYh7LXoAV5UzQtmaa014iKY4CTrvB5K7wu1MqFYvFo54l1IvwSLCQEayN5Nv2IMJ6HITZU+5n7MLNfbvuNJw50CpxS1KHEi3f/hoMY5JGVGjCsVJ910m0l2GpGeE0rw1SRRNMJnhE+4YKHFHlZfPkOTozyhCFsTRPaDRXf29kOFJqFgVmssiplr1C/M/rpzq88TImklRTQRaHwpQjHaOiBjRkkhLNZ4ZgIpnJisgYS0y0KatmSnCXv7xKOpcN12m4D1f15m1ZRxVO4BTOwYVraMI9tKANBKbwDK/wZmXWi/VufSxGK1a5cwx/YH3+AKSik6U=</latexit> <latexit sha1_base64=\"3J3YM5kQNFhF+Fz4K7ACLuQz+sI=\">AAAB7nicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eKxgP6ANZbPZtEs3u2F3IpTQH+HFgyJe/T3e/Ddu2hy09cHA470ZZuaFqeAGPe/bWVvf2NzaruxUd/f2Dw5rR8cdozJNWZsqoXQvJIYJLlkbOQrWSzUjSShYN5zcFX73iWnDlXzEacqChIwkjzklaKXugEYKTXVYq3sNbw53lfglqUOJ1rD2NYgUzRImkQpiTN/3UgxyopFTwWbVQWZYSuiEjFjfUkkSZoJ8fu7MPbdK5MZK25LoztXfEzlJjJkmoe1MCI7NsleI/3n9DOObIOcyzZBJulgUZ8JF5Ra/uxHXjKKYWkKo5vZWl46JJhRtQkUI/vLLq6Rz2fC9hv9wVW/elnFU4BTO4AJ8uIYm3EML2kBhAs/wCm9O6rw4787HonXNKWdO4A+czx/k5o9D</latexit> <latexit sha1_base64=\"3J3YM5kQNFhF+Fz4K7ACLuQz+sI=\">AAAB7nicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eKxgP6ANZbPZtEs3u2F3IpTQH+HFgyJe/T3e/Ddu2hy09cHA470ZZuaFqeAGPe/bWVvf2NzaruxUd/f2Dw5rR8cdozJNWZsqoXQvJIYJLlkbOQrWSzUjSShYN5zcFX73iWnDlXzEacqChIwkjzklaKXugEYKTXVYq3sNbw53lfglqUOJ1rD2NYgUzRImkQpiTN/3UgxyopFTwWbVQWZYSuiEjFjfUkkSZoJ8fu7MPbdK5MZK25LoztXfEzlJjJkmoe1MCI7NsleI/3n9DOObIOcyzZBJulgUZ8JF5Ra/uxHXjKKYWkKo5vZWl46JJhRtQkUI/vLLq6Rz2fC9hv9wVW/elnFU4BTO4AJ8uIYm3EML2kBhAs/wCm9O6rw4787HonXNKWdO4A+czx/k5o9D</latexit> <latexit sha1_base64=\"3J3YM5kQNFhF+Fz4K7ACLuQz+sI=\">AAAB7nicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eKxgP6ANZbPZtEs3u2F3IpTQH+HFgyJe/T3e/Ddu2hy09cHA470ZZuaFqeAGPe/bWVvf2NzaruxUd/f2Dw5rR8cdozJNWZsqoXQvJIYJLlkbOQrWSzUjSShYN5zcFX73iWnDlXzEacqChIwkjzklaKXugEYKTXVYq3sNbw53lfglqUOJ1rD2NYgUzRImkQpiTN/3UgxyopFTwWbVQWZYSuiEjFjfUkkSZoJ8fu7MPbdK5MZK25LoztXfEzlJjJkmoe1MCI7NsleI/3n9DOObIOcyzZBJulgUZ8JF5Ra/uxHXjKKYWkKo5vZWl46JJhRtQkUI/vLLq6Rz2fC9hv9wVW/elnFU4BTO4AJ8uIYm3EML2kBhAs/wCm9O6rw4787HonXNKWdO4A+czx/k5o9D</latexit> <latexit sha1_base64=\"3J3YM5kQNFhF+Fz4K7ACLuQz+sI=\">AAAB7nicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eKxgP6ANZbPZtEs3u2F3IpTQH+HFgyJe/T3e/Ddu2hy09cHA470ZZuaFqeAGPe/bWVvf2NzaruxUd/f2Dw5rR8cdozJNWZsqoXQvJIYJLlkbOQrWSzUjSShYN5zcFX73iWnDlXzEacqChIwkjzklaKXugEYKTXVYq3sNbw53lfglqUOJ1rD2NYgUzRImkQpiTN/3UgxyopFTwWbVQWZYSuiEjFjfUkkSZoJ8fu7MPbdK5MZK25LoztXfEzlJjJkmoe1MCI7NsleI/3n9DOObIOcyzZBJulgUZ8JF5Ra/uxHXjKKYWkKo5vZWl46JJhRtQkUI/vLLq6Rz2fC9hv9wVW/elnFU4BTO4AJ8uIYm3EML2kBhAs/wCm9O6rw4787HonXNKWdO4A+czx/k5o9D</latexit> labeled E i <latexit 
sha1_base64=\"VoyGd6VsT4TaaAxBSLt5efFcP4U=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LIogssK9gFNKJPppB06mYSZG6GE/oYbF4q49Wfc+TdO2yy09cDA4Zx7uWdOmEph0HW/ndLa+sbmVnm7srO7t39QPTxqmyTTjLdYIhPdDanhUijeQoGSd1PNaRxK3gnHtzO/88S1EYl6xEnKg5gOlYgEo2gl348pjsIov5v2Rb9ac+vuHGSVeAWpQYFmv/rlDxKWxVwhk9SYnuemGORUo2CSTyt+ZnhK2ZgOec9SRWNugnyeeUrOrDIgUaLtU0jm6u+NnMbGTOLQTs4ymmVvJv7n9TKMroNcqDRDrtjiUJRJggmZFUAGQnOGcmIJZVrYrISNqKYMbU0VW4K3/OVV0r6oe27de7isNW6KOspwAqdwDh5cQQPuoQktYJDCM7zCm5M5L86787EYLTnFzjH8gfP5AzFbkcU=</latexit> <latexit sha1_base64=\"VoyGd6VsT4TaaAxBSLt5efFcP4U=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LIogssK9gFNKJPppB06mYSZG6GE/oYbF4q49Wfc+TdO2yy09cDA4Zx7uWdOmEph0HW/ndLa+sbmVnm7srO7t39QPTxqmyTTjLdYIhPdDanhUijeQoGSd1PNaRxK3gnHtzO/88S1EYl6xEnKg5gOlYgEo2gl348pjsIov5v2Rb9ac+vuHGSVeAWpQYFmv/rlDxKWxVwhk9SYnuemGORUo2CSTyt+ZnhK2ZgOec9SRWNugnyeeUrOrDIgUaLtU0jm6u+NnMbGTOLQTs4ymmVvJv7n9TKMroNcqDRDrtjiUJRJggmZFUAGQnOGcmIJZVrYrISNqKYMbU0VW4K3/OVV0r6oe27de7isNW6KOspwAqdwDh5cQQPuoQktYJDCM7zCm5M5L86787EYLTnFzjH8gfP5AzFbkcU=</latexit> <latexit sha1_base64=\"VoyGd6VsT4TaaAxBSLt5efFcP4U=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LIogssK9gFNKJPppB06mYSZG6GE/oYbF4q49Wfc+TdO2yy09cDA4Zx7uWdOmEph0HW/ndLa+sbmVnm7srO7t39QPTxqmyTTjLdYIhPdDanhUijeQoGSd1PNaRxK3gnHtzO/88S1EYl6xEnKg5gOlYgEo2gl348pjsIov5v2Rb9ac+vuHGSVeAWpQYFmv/rlDxKWxVwhk9SYnuemGORUo2CSTyt+ZnhK2ZgOec9SRWNugnyeeUrOrDIgUaLtU0jm6u+NnMbGTOLQTs4ymmVvJv7n9TKMroNcqDRDrtjiUJRJggmZFUAGQnOGcmIJZVrYrISNqKYMbU0VW4K3/OVV0r6oe27de7isNW6KOspwAqdwDh5cQQPuoQktYJDCM7zCm5M5L86787EYLTnFzjH8gfP5AzFbkcU=</latexit> <latexit sha1_base64=\"VoyGd6VsT4TaaAxBSLt5efFcP4U=\">AAAB83icbVDLSsNAFL2pr1pfVZduBovgqiQi6LIogssK9gFNKJPppB06mYSZG6GE/oYbF4q49Wfc+TdO2yy09cDA4Zx7uWdOmEph0HW/ndLa+sbmVnm7srO7t39QPTxqmyTTjLdYIhPdDanhUijeQoGSd1PNaRxK3gnHtzO/88S1EYl6xEnKg5gOlYgEo2gl348pjsIov5v2Rb9ac+vuHGSVeAWpQYFmv/rlDxKWxVwhk9SYnuemGORUo2CSTyt+ZnhK2ZgOec9SRWNugnyeeUrOrDIgUaLtU0jm6u+NnMbGTOLQTs4ymmVvJv7n9TKMroNcqDRDrtjiUJRJggmZFUAGQnOGcmIJZVrYrISNqKYMbU0VW4K3/OVV0r6oe27de7isNW6KOspwAqdwDh5cQQPuoQktYJDCM7zCm5M5L86787EYLTnFzjH8gfP5AzFbkcU=</latexit> LSTM D i <latexit sha1_base64=\"kspifSvEaDCFEYWoCgMEI1M95YU=\">AAAB+3icbVBNS8NAEN3Ur1q/Yj16WSyCp5KIoMeiHjxWsLXQhLDZbtqlm03YnUhLyF/x4kERr/4Rb/4bt20O2vpg4PHeDDPzwlRwDY7zbVXW1jc2t6rbtZ3dvf0D+7De1UmmKOvQRCSqFxLNBJesAxwE66WKkTgU7DEc38z8xyemNE/kA0xT5sdkKHnEKQEjBXbdGxHIPWATCKP8tigCHtgNp+nMgVeJW5IGKtEO7C9vkNAsZhKoIFr3XScFPycKOBWsqHmZZimhYzJkfUMliZn28/ntBT41ygBHiTIlAc/V3xM5ibWexqHpjAmM9LI3E//z+hlEV37OZZoBk3SxKMoEhgTPgsADrhgFMTWEUMXNrZiOiCIUTFw1E4K7/PIq6Z43Xafp3l80WtdlHFV0jE7QGXLRJWqhO9RGHUTRBD2jV/RmFdaL9W59LForVjlzhP7A+vwBt7qU3Q==</latexit> <latexit sha1_base64=\"kspifSvEaDCFEYWoCgMEI1M95YU=\">AAAB+3icbVBNS8NAEN3Ur1q/Yj16WSyCp5KIoMeiHjxWsLXQhLDZbtqlm03YnUhLyF/x4kERr/4Rb/4bt20O2vpg4PHeDDPzwlRwDY7zbVXW1jc2t6rbtZ3dvf0D+7De1UmmKOvQRCSqFxLNBJesAxwE66WKkTgU7DEc38z8xyemNE/kA0xT5sdkKHnEKQEjBXbdGxHIPWATCKP8tigCHtgNp+nMgVeJW5IGKtEO7C9vkNAsZhKoIFr3XScFPycKOBWsqHmZZimhYzJkfUMliZn28/ntBT41ygBHiTIlAc/V3xM5ibWexqHpjAmM9LI3E//z+hlEV37OZZoBk3SxKMoEhgTPgsADrhgFMTWEUMXNrZiOiCIUTFw1E4K7/PIq6Z43Xafp3l80WtdlHFV0jE7QGXLRJWqhO9RGHUTRBD2jV/RmFdaL9W59LForVjlzhP7A+vwBt7qU3Q==</latexit> <latexit 
sha1_base64=\"kspifSvEaDCFEYWoCgMEI1M95YU=\">AAAB+3icbVBNS8NAEN3Ur1q/Yj16WSyCp5KIoMeiHjxWsLXQhLDZbtqlm03YnUhLyF/x4kERr/4Rb/4bt20O2vpg4PHeDDPzwlRwDY7zbVXW1jc2t6rbtZ3dvf0D+7De1UmmKOvQRCSqFxLNBJesAxwE66WKkTgU7DEc38z8xyemNE/kA0xT5sdkKHnEKQEjBXbdGxHIPWATCKP8tigCHtgNp+nMgVeJW5IGKtEO7C9vkNAsZhKoIFr3XScFPycKOBWsqHmZZimhYzJkfUMliZn28/ntBT41ygBHiTIlAc/V3xM5ibWexqHpjAmM9LI3E//z+hlEV37OZZoBk3SxKMoEhgTPgsADrhgFMTWEUMXNrZiOiCIUTFw1E4K7/PIq6Z43Xafp3l80WtdlHFV0jE7QGXLRJWqhO9RGHUTRBD2jV/RmFdaL9W59LForVjlzhP7A+vwBt7qU3Q==</latexit> <latexit sha1_base64=\"kspifSvEaDCFEYWoCgMEI1M95YU=\">AAAB+3icbVBNS8NAEN3Ur1q/Yj16WSyCp5KIoMeiHjxWsLXQhLDZbtqlm03YnUhLyF/x4kERr/4Rb/4bt20O2vpg4PHeDDPzwlRwDY7zbVXW1jc2t6rbtZ3dvf0D+7De1UmmKOvQRCSqFxLNBJesAxwE66WKkTgU7DEc38z8xyemNE/kA0xT5sdkKHnEKQEjBXbdGxHIPWATCKP8tigCHtgNp+nMgVeJW5IGKtEO7C9vkNAsZhKoIFr3XScFPycKOBWsqHmZZimhYzJkfUMliZn28/ntBT41ygBHiTIlAc/V3xM5ibWexqHpjAmM9LI3E//z+hlEV37OZZoBk3SxKMoEhgTPgsADrhgFMTWEUMXNrZiOiCIUTFw1E4K7/PIq6Z43Xafp3l80WtdlHFV0jE7QGXLRJWqhO9RGHUTRBD2jV/RmFdaL9W59LForVjlzhP7A+vwBt7qU3Q==</latexit> <latexit sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> <latexit sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> <latexit sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> <latexit sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> <latexit 
sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> <latexit sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> <latexit sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> <latexit sha1_base64=\"Gl+cK3tKw908kM8UcWCVA4nLACo=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHoxWMFUwttKJvNpl262Q27E6GE/gYvHhTx6g/y5r9x2+agrQ8GHu/NMDMvygQ36HnfTmVtfWNzq7pd29nd2z+oHx51jMo1ZQFVQuluRAwTXLIAOQrWzTQjaSTYYzS+nfmPT0wbruQDTjIWpmQoecIpQSsFfRorHNQbXtObw10lfkkaUKI9qH/1Y0XzlEmkghjT870Mw4Jo5FSwaa2fG5YROiZD1rNUkpSZsJgfO3XPrBK7idK2JLpz9fdEQVJjJmlkO1OCI7PszcT/vF6OyXVYcJnlyCRdLEpy4aJyZ5+7MdeMophYQqjm9laXjogmFG0+NRuCv/zyKulcNH2v6d9fNlo3ZRxVOIFTOAcfrqAFd9CGAChweIZXeHOk8+K8Ox+L1opTzhzDHzifP9kKjrI=</latexit> w i,K <latexit sha1_base64=\"lKg/tOz6P0oSx/GWixo/Qsj8UJE=\">AAAB+XicbVDLSsNAFL3xWesr6tLNYBFcSElE0GXRjeCmgn1AG8JkOmmHTiZhZlIpIX/ixoUibv0Td/6NkzYLbT0wcDjnXu6ZEyScKe0439bK6tr6xmZlq7q9s7u3bx8ctlWcSkJbJOax7AZYUc4EbWmmOe0mkuIo4LQTjG8LvzOhUrFYPOppQr0IDwULGcHaSL5t9yOsR0GYPeV+xs7vc9+uOXVnBrRM3JLUoETTt7/6g5ikERWacKxUz3US7WVYakY4zav9VNEEkzEe0p6hAkdUedkseY5OjTJAYSzNExrN1N8bGY6UmkaBmSxyqkWvEP/zeqkOr72MiSTVVJD5oTDlSMeoqAENmKRE86khmEhmsiIywhITbcqqmhLcxS8vk/ZF3XXq7sNlrXFT1lGBYziBM3DhChpwB01oAYEJPMMrvFmZ9WK9Wx/z0RWr3DmCP7A+fwDMJJO/</latexit> <latexit sha1_base64=\"lKg/tOz6P0oSx/GWixo/Qsj8UJE=\">AAAB+XicbVDLSsNAFL3xWesr6tLNYBFcSElE0GXRjeCmgn1AG8JkOmmHTiZhZlIpIX/ixoUibv0Td/6NkzYLbT0wcDjnXu6ZEyScKe0439bK6tr6xmZlq7q9s7u3bx8ctlWcSkJbJOax7AZYUc4EbWmmOe0mkuIo4LQTjG8LvzOhUrFYPOppQr0IDwULGcHaSL5t9yOsR0GYPeV+xs7vc9+uOXVnBrRM3JLUoETTt7/6g5ikERWacKxUz3US7WVYakY4zav9VNEEkzEe0p6hAkdUedkseY5OjTJAYSzNExrN1N8bGY6UmkaBmSxyqkWvEP/zeqkOr72MiSTVVJD5oTDlSMeoqAENmKRE86khmEhmsiIywhITbcqqmhLcxS8vk/ZF3XXq7sNlrXFT1lGBYziBM3DhChpwB01oAYEJPMMrvFmZ9WK9Wx/z0RWr3DmCP7A+fwDMJJO/</latexit> <latexit 
sha1_base64=\"lKg/tOz6P0oSx/GWixo/Qsj8UJE=\">AAAB+XicbVDLSsNAFL3xWesr6tLNYBFcSElE0GXRjeCmgn1AG8JkOmmHTiZhZlIpIX/ixoUibv0Td/6NkzYLbT0wcDjnXu6ZEyScKe0439bK6tr6xmZlq7q9s7u3bx8ctlWcSkJbJOax7AZYUc4EbWmmOe0mkuIo4LQTjG8LvzOhUrFYPOppQr0IDwULGcHaSL5t9yOsR0GYPeV+xs7vc9+uOXVnBrRM3JLUoETTt7/6g5ikERWacKxUz3US7WVYakY4zav9VNEEkzEe0p6hAkdUedkseY5OjTJAYSzNExrN1N8bGY6UmkaBmSxyqkWvEP/zeqkOr72MiSTVVJD5oTDlSMeoqAENmKRE86khmEhmsiIywhITbcqqmhLcxS8vk/ZF3XXq7sNlrXFT1lGBYziBM3DhChpwB01oAYEJPMMrvFmZ9WK9Wx/z0RWr3DmCP7A+fwDMJJO/</latexit> <latexit sha1_base64=\"lKg/tOz6P0oSx/GWixo/Qsj8UJE=\">AAAB+XicbVDLSsNAFL3xWesr6tLNYBFcSElE0GXRjeCmgn1AG8JkOmmHTiZhZlIpIX/ixoUibv0Td/6NkzYLbT0wcDjnXu6ZEyScKe0439bK6tr6xmZlq7q9s7u3bx8ctlWcSkJbJOax7AZYUc4EbWmmOe0mkuIo4LQTjG8LvzOhUrFYPOppQr0IDwULGcHaSL5t9yOsR0GYPeV+xs7vc9+uOXVnBrRM3JLUoETTt7/6g5ikERWacKxUz3US7WVYakY4zav9VNEEkzEe0p6hAkdUedkseY5OjTJAYSzNExrN1N8bGY6UmkaBmSxyqkWvEP/zeqkOr72MiSTVVJD5oTDlSMeoqAENmKRE86khmEhmsiIywhITbcqqmhLcxS8vk/ZF3XXq7sNlrXFT1lGBYziBM3DhChpwB01oAYEJPMMrvFmZ9WK9Wx/z0RWr3DmCP7A+fwDMJJO/</latexit> Bi-LSTM Bi-LSTM input E i <latexit sha1_base64=\"3Bm26c0AzQsaOwy0qRUWebk0hNY=\">AAAB/XicbVDLSsNAFL2pr1pf8bFzM1gEVyURQZdFEVxWsLXQhDKZTtqhk0mYmQg1BH/FjQtF3Pof7vwbJ20W2npg4HDOvdwzJ0g4U9pxvq3K0vLK6lp1vbaxubW9Y+/udVScSkLbJOax7AZYUc4EbWumOe0mkuIo4PQ+GF8V/v0DlYrF4k5PEupHeChYyAjWRurbB94I68yLsB4FYXad5/2M5X277jScKdAicUtShxKtvv3lDWKSRlRowrFSPddJtJ9hqRnhNK95qaIJJmM8pD1DBY6o8rNp+hwdG2WAwliaJzSaqr83MhwpNYkCM1nEVPNeIf7n9VIdXvgZE0mqqSCzQ2HKkY5RUQUaMEmJ5hNDMJHMZEVkhCUm2hRWMyW4819eJJ3Thus03NuzevOyrKMKh3AEJ+DCOTThBlrQBgKP8Ayv8GY9WS/Wu/UxG61Y5c4+/IH1+QNg/JXP</latexit> <latexit sha1_base64=\"3Bm26c0AzQsaOwy0qRUWebk0hNY=\">AAAB/XicbVDLSsNAFL2pr1pf8bFzM1gEVyURQZdFEVxWsLXQhDKZTtqhk0mYmQg1BH/FjQtF3Pof7vwbJ20W2npg4HDOvdwzJ0g4U9pxvq3K0vLK6lp1vbaxubW9Y+/udVScSkLbJOax7AZYUc4EbWumOe0mkuIo4PQ+GF8V/v0DlYrF4k5PEupHeChYyAjWRurbB94I68yLsB4FYXad5/2M5X277jScKdAicUtShxKtvv3lDWKSRlRowrFSPddJtJ9hqRnhNK95qaIJJmM8pD1DBY6o8rNp+hwdG2WAwliaJzSaqr83MhwpNYkCM1nEVPNeIf7n9VIdXvgZE0mqqSCzQ2HKkY5RUQUaMEmJ5hNDMJHMZEVkhCUm2hRWMyW4819eJJ3Thus03NuzevOyrKMKh3AEJ+DCOTThBlrQBgKP8Ayv8GY9WS/Wu/UxG61Y5c4+/IH1+QNg/JXP</latexit> <latexit sha1_base64=\"3Bm26c0AzQsaOwy0qRUWebk0hNY=\">AAAB/XicbVDLSsNAFL2pr1pf8bFzM1gEVyURQZdFEVxWsLXQhDKZTtqhk0mYmQg1BH/FjQtF3Pof7vwbJ20W2npg4HDOvdwzJ0g4U9pxvq3K0vLK6lp1vbaxubW9Y+/udVScSkLbJOax7AZYUc4EbWumOe0mkuIo4PQ+GF8V/v0DlYrF4k5PEupHeChYyAjWRurbB94I68yLsB4FYXad5/2M5X277jScKdAicUtShxKtvv3lDWKSRlRowrFSPddJtJ9hqRnhNK95qaIJJmM8pD1DBY6o8rNp+hwdG2WAwliaJzSaqr83MhwpNYkCM1nEVPNeIf7n9VIdXvgZE0mqqSCzQ2HKkY5RUQUaMEmJ5hNDMJHMZEVkhCUm2hRWMyW4819eJJ3Thus03NuzevOyrKMKh3AEJ+DCOTThBlrQBgKP8Ayv8GY9WS/Wu/UxG61Y5c4+/IH1+QNg/JXP</latexit> <latexit sha1_base64=\"3Bm26c0AzQsaOwy0qRUWebk0hNY=\">AAAB/XicbVDLSsNAFL2pr1pf8bFzM1gEVyURQZdFEVxWsLXQhDKZTtqhk0mYmQg1BH/FjQtF3Pof7vwbJ20W2npg4HDOvdwzJ0g4U9pxvq3K0vLK6lp1vbaxubW9Y+/udVScSkLbJOax7AZYUc4EbWumOe0mkuIo4PQ+GF8V/v0DlYrF4k5PEupHeChYyAjWRurbB94I68yLsB4FYXad5/2M5X277jScKdAicUtShxKtvv3lDWKSRlRowrFSPddJtJ9hqRnhNK95qaIJJmM8pD1DBY6o8rNp+hwdG2WAwliaJzSaqr83MhwpNYkCM1nEVPNeIf7n9VIdXvgZE0mqqSCzQ2HKkY5RUQUaMEmJ5hNDMJHMZEVkhCUm2hRWMyW4819eJJ3Thus03NuzevOyrKMKh3AEJ+DCOTThBlrQBgKP8Ayv8GY9WS/Wu/UxG61Y5c4+/IH1+QNg/JXP</latexit> P r <latexit 
sha1_base64=\"8/qHSnsk0r20gg+5XM6uNQV7VGs=\">AAAB83icbVBNS8NAFHypX7V+VT16WSyCp5KIoMeiF48VbC00oWy2L+3SzSbsboQS+je8eFDEq3/Gm//GTZuDtg4sDDPv8WYnTAXXxnW/ncra+sbmVnW7trO7t39QPzzq6iRTDDssEYnqhVSj4BI7hhuBvVQhjUOBj+HktvAfn1BpnsgHM00xiOlI8ogzaqzk+zE14zDK27OBGtQbbtOdg6wSryQNKNEe1L/8YcKyGKVhgmrd99zUBDlVhjOBs5qfaUwpm9AR9i2VNEYd5PPMM3JmlSGJEmWfNGSu/t7Iaaz1NA7tZJFRL3uF+J/Xz0x0HeRcpplByRaHokwQk5CiADLkCpkRU0soU9xmJWxMFWXG1lSzJXjLX14l3Yum5za9+8tG66asowoncArn4MEVtOAO2tABBik8wyu8OZnz4rw7H4vRilPuHMMfOJ8/T8yR2Q==</latexit> <latexit sha1_base64=\"8/qHSnsk0r20gg+5XM6uNQV7VGs=\">AAAB83icbVBNS8NAFHypX7V+VT16WSyCp5KIoMeiF48VbC00oWy2L+3SzSbsboQS+je8eFDEq3/Gm//GTZuDtg4sDDPv8WYnTAXXxnW/ncra+sbmVnW7trO7t39QPzzq6iRTDDssEYnqhVSj4BI7hhuBvVQhjUOBj+HktvAfn1BpnsgHM00xiOlI8ogzaqzk+zE14zDK27OBGtQbbtOdg6wSryQNKNEe1L/8YcKyGKVhgmrd99zUBDlVhjOBs5qfaUwpm9AR9i2VNEYd5PPMM3JmlSGJEmWfNGSu/t7Iaaz1NA7tZJFRL3uF+J/Xz0x0HeRcpplByRaHokwQk5CiADLkCpkRU0soU9xmJWxMFWXG1lSzJXjLX14l3Yum5za9+8tG66asowoncArn4MEVtOAO2tABBik8wyu8OZnz4rw7H4vRilPuHMMfOJ8/T8yR2Q==</latexit> <latexit sha1_base64=\"8/qHSnsk0r20gg+5XM6uNQV7VGs=\">AAAB83icbVBNS8NAFHypX7V+VT16WSyCp5KIoMeiF48VbC00oWy2L+3SzSbsboQS+je8eFDEq3/Gm//GTZuDtg4sDDPv8WYnTAXXxnW/ncra+sbmVnW7trO7t39QPzzq6iRTDDssEYnqhVSj4BI7hhuBvVQhjUOBj+HktvAfn1BpnsgHM00xiOlI8ogzaqzk+zE14zDK27OBGtQbbtOdg6wSryQNKNEe1L/8YcKyGKVhgmrd99zUBDlVhjOBs5qfaUwpm9AR9i2VNEYd5PPMM3JmlSGJEmWfNGSu/t7Iaaz1NA7tZJFRL3uF+J/Xz0x0HeRcpplByRaHokwQk5CiADLkCpkRU0soU9xmJWxMFWXG1lSzJXjLX14l3Yum5za9+8tG66asowoncArn4MEVtOAO2tABBik8wyu8OZnz4rw7H4vRilPuHMMfOJ8/T8yR2Q==</latexit> <latexit sha1_base64=\"8/qHSnsk0r20gg+5XM6uNQV7VGs=\">AAAB83icbVBNS8NAFHypX7V+VT16WSyCp5KIoMeiF48VbC00oWy2L+3SzSbsboQS+je8eFDEq3/Gm//GTZuDtg4sDDPv8WYnTAXXxnW/ncra+sbmVnW7trO7t39QPzzq6iRTDDssEYnqhVSj4BI7hhuBvVQhjUOBj+HktvAfn1BpnsgHM00xiOlI8ogzaqzk+zE14zDK27OBGtQbbtOdg6wSryQNKNEe1L/8YcKyGKVhgmrd99zUBDlVhjOBs5qfaUwpm9AR9i2VNEYd5PPMM3JmlSGJEmWfNGSu/t7Iaaz1NA7tZJFRL3uF+J/Xz0x0HeRcpplByRaHokwQk5CiADLkCpkRU0soU9xmJWxMFWXG1lSzJXjLX14l3Yum5za9+8tG66asowoncArn4MEVtOAO2tABBik8wyu8OZnz4rw7H4vRilPuHMMfOJ8/T8yR2Q==</latexit> y i <latexit sha1_base64=\"FxGko2ZKNGP+ta9DvpPAbW6BZo0=\">AAAB+XicbVDLSsNAFL2pr1pfUZdugkVwVRIRdFl047KCfUAbwmQyaYdOZsLMpBBC/8SNC0Xc+ifu/BsnbRbaemCYwzn3MmdOmDKqtOt+W7WNza3tnfpuY2//4PDIPj7pKZFJTLpYMCEHIVKEUU66mmpGBqkkKAkZ6YfT+9Lvz4hUVPAnnafET9CY05hipI0U2PYoFCxSeWKuIp8HNLCbbstdwFknXkWaUKET2F+jSOAsIVxjhpQaem6q/QJJTTEj88YoUyRFeIrGZGgoRwlRfrFIPncujBI5sZDmcO0s1N8bBUpUGc5MJkhP1KpXiv95w0zHt35BeZppwvHyoThjjhZOWYMTUUmwZrkhCEtqsjp4giTC2pTVMCV4q19eJ72rlue2vMfrZvuuqqMOZ3AOl+DBDbThATrQBQwzeIZXeLMK68V6tz6WozWr2jmFP7A+fwBSSJQX</latexit> <latexit sha1_base64=\"FxGko2ZKNGP+ta9DvpPAbW6BZo0=\">AAAB+XicbVDLSsNAFL2pr1pfUZdugkVwVRIRdFl047KCfUAbwmQyaYdOZsLMpBBC/8SNC0Xc+ifu/BsnbRbaemCYwzn3MmdOmDKqtOt+W7WNza3tnfpuY2//4PDIPj7pKZFJTLpYMCEHIVKEUU66mmpGBqkkKAkZ6YfT+9Lvz4hUVPAnnafET9CY05hipI0U2PYoFCxSeWKuIp8HNLCbbstdwFknXkWaUKET2F+jSOAsIVxjhpQaem6q/QJJTTEj88YoUyRFeIrGZGgoRwlRfrFIPncujBI5sZDmcO0s1N8bBUpUGc5MJkhP1KpXiv95w0zHt35BeZppwvHyoThjjhZOWYMTUUmwZrkhCEtqsjp4giTC2pTVMCV4q19eJ72rlue2vMfrZvuuqqMOZ3AOl+DBDbThATrQBQwzeIZXeLMK68V6tz6WozWr2jmFP7A+fwBSSJQX</latexit> <latexit 
sha1_base64=\"FxGko2ZKNGP+ta9DvpPAbW6BZo0=\">AAAB+XicbVDLSsNAFL2pr1pfUZdugkVwVRIRdFl047KCfUAbwmQyaYdOZsLMpBBC/8SNC0Xc+ifu/BsnbRbaemCYwzn3MmdOmDKqtOt+W7WNza3tnfpuY2//4PDIPj7pKZFJTLpYMCEHIVKEUU66mmpGBqkkKAkZ6YfT+9Lvz4hUVPAnnafET9CY05hipI0U2PYoFCxSeWKuIp8HNLCbbstdwFknXkWaUKET2F+jSOAsIVxjhpQaem6q/QJJTTEj88YoUyRFeIrGZGgoRwlRfrFIPncujBI5sZDmcO0s1N8bBUpUGc5MJkhP1KpXiv95w0zHt35BeZppwvHyoThjjhZOWYMTUUmwZrkhCEtqsjp4giTC2pTVMCV4q19eJ72rlue2vMfrZvuuqqMOZ3AOl+DBDbThATrQBQwzeIZXeLMK68V6tz6WozWr2jmFP7A+fwBSSJQX</latexit> <latexit sha1_base64=\"FxGko2ZKNGP+ta9DvpPAbW6BZo0=\">AAAB+XicbVDLSsNAFL2pr1pfUZdugkVwVRIRdFl047KCfUAbwmQyaYdOZsLMpBBC/8SNC0Xc+ifu/BsnbRbaemCYwzn3MmdOmDKqtOt+W7WNza3tnfpuY2//4PDIPj7pKZFJTLpYMCEHIVKEUU66mmpGBqkkKAkZ6YfT+9Lvz4hUVPAnnafET9CY05hipI0U2PYoFCxSeWKuIp8HNLCbbstdwFknXkWaUKET2F+jSOAsIVxjhpQaem6q/QJJTTEj88YoUyRFeIrGZGgoRwlRfrFIPncujBI5sZDmcO0s1N8bBUpUGc5MJkhP1KpXiv95w0zHt35BeZppwvHyoThjjhZOWYMTUUmwZrkhCEtqsjp4giTC2pTVMCV4q19eJ72rlue2vMfrZvuuqqMOZ3AOl+DBDbThATrQBQwzeIZXeLMK68V6tz6WozWr2jmFP7A+fwBSSJQX</latexit> y i <latexit sha1_base64=\"P8jhd8f4ZvW3e7Acz0oCir4tgMU=\">AAAB/3icbVDNS8MwHE3n15xfVcGLl+AQPI1WBD0OvXic4D5gLSVNsy0sTUqSCqX24L/ixYMiXv03vPnfmG496OaDkMd7vx95eWHCqNKO823VVlbX1jfqm42t7Z3dPXv/oKdEKjHpYsGEHIRIEUY56WqqGRkkkqA4ZKQfTm9Kv/9ApKKC3+ssIX6MxpyOKEbaSIF95E2Qzr1QsEhlsbnyrCgCGthNp+XMAJeJW5EmqNAJ7C8vEjiNCdeYIaWGrpNoP0dSU8xI0fBSRRKEp2hMhoZyFBPl57P8BTw1SgRHQprDNZypvzdyFKsynZmMkZ6oRa8U//OGqR5d+TnlSaoJx/OHRimDWsCyDBhRSbBmmSEIS2qyQjxBEmFtKmuYEtzFLy+T3nnLdVru3UWzfV3VUQfH4AScARdcgja4BR3QBRg8gmfwCt6sJ+vFerc+5qM1q9o5BH9gff4AQ2uW5A==</latexit> <latexit sha1_base64=\"P8jhd8f4ZvW3e7Acz0oCir4tgMU=\">AAAB/3icbVDNS8MwHE3n15xfVcGLl+AQPI1WBD0OvXic4D5gLSVNsy0sTUqSCqX24L/ixYMiXv03vPnfmG496OaDkMd7vx95eWHCqNKO823VVlbX1jfqm42t7Z3dPXv/oKdEKjHpYsGEHIRIEUY56WqqGRkkkqA4ZKQfTm9Kv/9ApKKC3+ssIX6MxpyOKEbaSIF95E2Qzr1QsEhlsbnyrCgCGthNp+XMAJeJW5EmqNAJ7C8vEjiNCdeYIaWGrpNoP0dSU8xI0fBSRRKEp2hMhoZyFBPl57P8BTw1SgRHQprDNZypvzdyFKsynZmMkZ6oRa8U//OGqR5d+TnlSaoJx/OHRimDWsCyDBhRSbBmmSEIS2qyQjxBEmFtKmuYEtzFLy+T3nnLdVru3UWzfV3VUQfH4AScARdcgja4BR3QBRg8gmfwCt6sJ+vFerc+5qM1q9o5BH9gff4AQ2uW5A==</latexit> <latexit sha1_base64=\"P8jhd8f4ZvW3e7Acz0oCir4tgMU=\">AAAB/3icbVDNS8MwHE3n15xfVcGLl+AQPI1WBD0OvXic4D5gLSVNsy0sTUqSCqX24L/ixYMiXv03vPnfmG496OaDkMd7vx95eWHCqNKO823VVlbX1jfqm42t7Z3dPXv/oKdEKjHpYsGEHIRIEUY56WqqGRkkkqA4ZKQfTm9Kv/9ApKKC3+ssIX6MxpyOKEbaSIF95E2Qzr1QsEhlsbnyrCgCGthNp+XMAJeJW5EmqNAJ7C8vEjiNCdeYIaWGrpNoP0dSU8xI0fBSRRKEp2hMhoZyFBPl57P8BTw1SgRHQprDNZypvzdyFKsynZmMkZ6oRa8U//OGqR5d+TnlSaoJx/OHRimDWsCyDBhRSbBmmSEIS2qyQjxBEmFtKmuYEtzFLy+T3nnLdVru3UWzfV3VUQfH4AScARdcgja4BR3QBRg8gmfwCt6sJ+vFerc+5qM1q9o5BH9gff4AQ2uW5A==</latexit> <latexit sha1_base64=\"P8jhd8f4ZvW3e7Acz0oCir4tgMU=\">AAAB/3icbVDNS8MwHE3n15xfVcGLl+AQPI1WBD0OvXic4D5gLSVNsy0sTUqSCqX24L/ixYMiXv03vPnfmG496OaDkMd7vx95eWHCqNKO823VVlbX1jfqm42t7Z3dPXv/oKdEKjHpYsGEHIRIEUY56WqqGRkkkqA4ZKQfTm9Kv/9ApKKC3+ssIX6MxpyOKEbaSIF95E2Qzr1QsEhlsbnyrCgCGthNp+XMAJeJW5EmqNAJ7C8vEjiNCdeYIaWGrpNoP0dSU8xI0fBSRRKEp2hMhoZyFBPl57P8BTw1SgRHQprDNZypvzdyFKsynZmMkZ6oRa8U//OGqR5d+TnlSaoJx/OHRimDWsCyDBhRSbBmmSEIS2qyQjxBEmFtKmuYEtzFLy+T3nnLdVru3UWzfV3VUQfH4AScARdcgja4BR3QBRg8gmfwCt6sJ+vFerc+5qM1q9o5BH9gff4AQ2uW5A==</latexit> output Figure 1: The framework of the proposed SemiORC.",
"the black-box model does not meet the needs for actionable managerial insights.",
"Thus, we hope that this work, which aims at addressing common issues in financial NLP system, provides valuable design guidance for financial applications with a significant societal and economic impact.",
"We now proceed with the details of our model SemiORC, and the overall architecture is shown in Figure 1.",
"In a nutshell, SemiORC consists of an encoder, a decoder and a semi-supervised classifier.",
"Specifically, the encoder network combines the document representation and label embedding to learn latent variables of words.",
"The decoder is used to generate document representation based on these latent variables.",
"We model the semi-supervised classifier by the LSTM, the fully-connected layer, and the softmax function.",
"Problem Definition.",
"Let D = D l D u be a set of finance documents with labeled D l and unlabeled data D u .",
"Each labeled document D i D l is associated with a number of operational risks y i ( y ) , where y = { y 1 , y 2 , , y R } is a set of R risk labels (e.g., Data Privacy and Bank Prosecution, etc.).",
"We consider operational risk classification (ORC) problem that labels the unlabeled documents with possible operational risks, i.e., D i ( D u ) y i ( y ) .",
"Document Representation.",
"In SemiORC, we employ a Bidirectional LSTM (Bi-LSTM) (Hochre-iter and Schmidhuber, 1997; Schuster and Pali-wal, 1997) as the basic content learning module.",
"Let D i be the i -th document with K words and w i,k denotes the one-hot representation of the k th word.",
"We first embed the k -th word into low-dimensional vectors using an embedding matrix M : w i,k = w i,k M , where w i,k R d and d is the dimension of word embedding.",
"Then, we use the two-layer Bi-LSTMs as the document encoder to obtain the representation of k -th word by concatenating the forward and backward hidden states of the second Bi-LSTM layer: h k = LSTM ( h k 1 , w i,k ) , (1) h k = LSTM ( h k +1 , w i,k ) , (2) where h k = [ h k , h k ] R 2 d .",
"Then, we can obtain the i -th document representation D i RK 2 d by concatenating all words' representation in this document.",
"Meanwhile, we get two final states from two directions of the second Bi-LSTM layer: hidden state f i R 2 d and cell state m i R 2 d .",
"Label Embedding.",
"In order to efficiently leverage risk label information, we propose a useful way to encode labels into low dimensional vectors in the training process.",
"We first get label embedding matrix E i as follows: E i = (cid:26) Linear ( y i ) , if D i D l Classifier ( D i ) , if D i D u (3) where E i R d L i and L i is the number of y i .",
"y i are the observed operational risks of i -th document, and the Linear is a fully-connected layer.",
"The Classifier is a semi-supervised classifier, which can predict risk labels and learn the corresponding label embedding based on both labeled and unlabeled document representation.",
"Inspired by prior work (Rai et al., 2015; Yang et al., 2018; Wang et al., 2018), we incorporate two final states f i and m i into label embedding E i through another Bi-LSTM, which is beneficial to learn the specific label embedding of i -th document: E i = Bi-LSTM ( E i , ( f i , m i )) , (4) where E i R 2 d L i .",
"Multi-head Attention.",
"The document vector usually involves rich semantics in multiple semantic spaces.",
"However, the traditional attention mechanisms only focus on a specific semantic space of document representation to learn the weights of words, which ignores the influence of other semantic spaces.",
"In our work, we utilize the multi-head attention mechanism (Vaswani et al., 2017; Tao et al., 2018; Huang et al., 2019) to learn the weights of all words for the corresponding labels in each document.",
"We first project document representation D i and label embedding matrix E i to h different semantic spaces through different learnable projection matrices.",
"Then, we learn the weight matrices of words for the labels from these semantic spaces: D ( r ) i = D i P r , E ( r ) i = P (cid:62) r E i , (5) a ( r ) i = softmax ( D ( r ) i E ( r ) i ) , r = 1 , , h (6) where P r R 2 d (2 d/h ) is the r -th projection matrix, D ( r ) i RK (2 d/h ) , and E ( r ) i R (2 d/h ) L i .",
"a ( r ) i RK L i denotes the weight matrix of words for the corresponding labels at the r -th semantic spaces.",
"Besides, a i = 1 h (cid:80) h r =1 a ( r ) i is the average accumulated weight matrix of words.",
"Subsequently, we can learn latent variables of words from the document representation through the LSTM network.",
"Inspired by prior work (Xu et al., 2017), we combine the label embedding and the latent variables to generate the document representation through the Decoder: z i = LSTM [ sigmoid ( a i a (cid:62) i ) D i ] , (7) D i = Decoder [ z i + tanh ( Linear ( a i E (cid:62) i ))] , (8) where z i RK ( d/",
"2) , D i RK 2 d , and the Linear is another fully-connected layer.",
"The sigmoid and tanh are two activation functions.",
"We model the Decoder by the LSTM network.",
"Leveraging Unlabeled Financial Documents.",
"Various machine learning models, including SVM (Cesa-Bianchi et al., 2006), representation learning (Dai and Le, 2015), and adversarial training (Miyato et al., 2017), have been used to solve the semi-supervised text classification.",
"Recently, VAE-based methods have been successfully used in semi-supervised learning and utilize unlabeled data to model the generating process of underlying data (Kingma and Welling, 2014; Miao et al., 2016; Xie and Ma, 2019; Gururangan et al., 2019).",
"In addition, previous work (Xu et al., 2017) proposes to incorporate labels into the decoder RNN for better text classification performance.",
"In this work, we use the semi-supervised variational autoencoder (SemiVAE) (Kingma et al., 2014; Yang et al., 2019) to exploit these data, which provides an efficient way to approximate the posterior distribution of latent variables by deriving a lower bound for the marginal likelihood of the observed data (a.k.a. ELBO).",
"More specifically, we assume a latent variable z for generating the representation of finance document, whose true posterior distribution p ( z |D ) is usually too complicated to have an analytical form.",
"We alternatively resort to the distribution in an exponential family to approximate the true posterior: q ( z |D ) p ( z |D ) .",
"The ELBO on the marginal likelihood of the finance documents is as follows: log p ( D ) log p ( D ) KL[ q ( z |D ) (cid:107) p ( z |D )] = E q ( z |D ) [log p ( D| z )] KL [ q ( z |D ) || p ( z )] , (9) where q ( z |D ) is an approximation to the true posterior p ( z |D ) .",
"Since the objective is to minimize the KL divergence between q ( z |D ) and the true distribution p ( z |D ) we can alternatively maximize ELBO L ( D ) of log p ( D ) .",
"Our model consists of three components: an encoder network q ( z i | D i , y i ) , the decoder network p ( D i | y i , z i ) , and a semi-supervised classifier q ( y i | D i ) .",
"For each labeled finance data D i D l and its corresponding observed risk labels y i y , the ELBO L ( D l ) with corresponding latent variable z is as follows: log p ( D i , y i ) E q ( z i | D i , y i ) [log p ( D i | y i , z i )] + log p ( y i ) KL [ q ( z i | D i , y i ) || p ( z i )] = L ( D l ) , (10) where KL [ q ( z i | D i , y i ) || p ( z i )] is the KL divergence between the latent posterior q ( z i | D i , y i ) and the prior distribution p ( z i ) that should be minimized.",
"Note that we utilize the KL cost annealing method (Bowman et al., 2016; Snderby et al., 2016) to smooth the training process by gradually increasing the weight of KL cost from 0 to 1.",
"In the case of each unlabeled document D i D u , the corresponding risks y i are predicted by performing posterior inference with a probabilistic classifier q ( y i | D i ) .",
"We now have the following ELBO L ( D u ) , by considering possible risks y i as another latent variable: log p ( D i ) (cid:88) y i q ( y i | D i )( L ( D l ))+ H ( q ( y i | D i )) = L ( D u ) , (11) The ELBOL ( D ) on the marginal likelihood for the entire dataset is as follows: L ( D ) = (cid:88) D l L ( D l ) + (cid:88) D u L ( D u ) + E ( D i , y i ) D l [ log q ( y i | D i )] .",
"where the last term denotes an additional classification loss of classifier q ( y i | D i ) when learning from the labeled data with a weight controlling hyper-parameter .",
"Data Description.",
"Our proprietary dataset combines a set of 5,483 financial news articles, collected by a risk management team (with a focus on Asian-Pacific region) in an international bank-ing organization.",
"The financial news articles are collected from several online mainstream financial news outlets during Feb 1, 2019, to Mar 1, 2019.",
"The news outlets include government agency such as the Association of Certified Financial Crime Specialists (ACFCS) and news agency such as The Edge Markets and Japan Times.",
"We remove noise data (e.g., inserted advertising and specific symbol) of all finance documents.",
"There are eight Operational Risk categories in Tabel 1, as defined in Basel Accords.",
"The details of our dataset are as follows: 730 labeled documents; 4,753 unlabeled documents; the average number of risk labels and words for documents are 2.1 and 453, respectively.",
"Baselines.",
"We consider the following baselines.",
"Logistic Regression is a vanilla supervised classification baseline.",
"It only leverages labeled documents to build a text classifier and predict risk categories.",
"We also consider the following three semi-supervised learning baselines.",
"Transductive SVM (TSVM) (Joachims, 1999) is a widely used semi-supervised method that extends SVMs with the goal that there are a few unlabeled data near the margin as possible.",
"Semi-supervised Variational Autoencoder (SemiVAE) (Kingma et al., 2014) proposes to utilize a deep generative model to exploit unlabeled data.",
"Our model SemiORC uses SemiVAE as one key component.",
"Semi-supervised Sequential Variational Autoencoder (SSVAE) (Xu et al., 2017) proposes to use a mod-ified version of LSTM as the decoder and is the state-of-the-art semi-supervised model for text classification.",
"However, none of the above baselines can highlight keywords that are informative to prediction outcomes, since they are black-box semi-supervised learning models.",
"Lastly, we consider one ablation baseline ORC , which is a supervised version of our SemiORC.",
"It ignores unlabeled data for modeling document representation.",
"Evaluation Metrics.",
"We follow the standard evaluation metrics of multi-label classification, including hamming loss, accuracy and micro-F1 score.",
"Hamming-loss (Schapire and Singer, 1999) calculates the average Hamming distance between true labels and predicted labels.",
"Accuracy computes the subset accuracy between true labels and predicted labels.",
"Micro-F1 (Manning et al., 2008) returns a weighted average of precision and recall, which is computed from true positives, false negatives, and false positives.",
"Experimental Setting.",
"Our model SemiORC is implemented with Tensorflow on a machine with NVIDIA GeForce GTX 1080Ti.",
"Specifically, we optimize the training process of the model using Adam optimizer (Kingma and Ba, 2015) and dropout regularization (Srivastava et al., 2014; Gal and Ghahramani, 2016).",
"We set the number of projection matrices and the dimension of word embedding as h = 4 and d = 64 .",
"The learning rate and weight parameter are empirically tuned to 0.001 and 2, respectively.",
"The dropout rate is scaled from 0.3 to 0.7.",
"For Logistic Regression and TSVM, we both use doc2vec (Le and Mikolov, 2014) to learn the finance document representation.",
"Additionally, we leverage the scikit-learn (Pedregosa et al., 2011) to build two text classifiers to predict the corresponding risk labels.",
"For SemiVAE and SSVAE, we model the encoders, the decoders, and the classifiers by the LSTM networks.",
"Experiment Results.",
"We perform 10 runs of 10-fold cross-validation on the dataset for each method.",
"Table 2 reports the overall classification performance on three metrics.",
"We can see that SemiORC achieves the best classification per-Method Hamming-loss Accuracy Micro-F1 score Logistic Regression 0.156( 0 . 030 ) 0.406( 0 . 026 ) 0.510( 0 . 031 ) TSVM 0.135( 0 . 027 ) 0.392( 0 . 037 ) 0.493( 0 . 036 ) SemiVAE 0.106( 0 . 024 ) 0.417( 0 . 031 ) 0.595( 0 . 020 ) SSVAE 0.097( 0 . 019 ) 0.457( 0 . 026 ) 0.621( 0 . 022 ) ORC 0.105( 0 . 013 ) 0.443( 0 . 022 ) 0.601( 0 . 028 ) SemiORC 0.084 ( 0 . 018 ) 0.529 ( 0 . 020 ) 0.651 ( 0 . 023 ) Table 2: Overall operational risk classification results.",
"formance in all three metrics.",
"Compared with SSVAE, SemiORC improves the Hamming-loss by 13.4% (0.097 vs. 0.084), Accuracy by 15.7% (0.457 vs. 0.529), Micro-F1 score by 4.8% (0.621 vs. 0.651).",
"Compared with the pioneer semi-supervised learning model SemiVAE, SemiORC improves the Hamming-loss by 20.7% (0.106 vs. 0.094), Accuracy by 21.1% (0.417 vs. 0.529), and Micro-F1 score by 24.3% (0.493 vs. 0.651).",
"The key difference between SemiORC and SSVAE or SemiVAE is that we leverage the multi-head attention mechanism to learn the weights of informative words which better encodes labeled and unlabeled documents.",
"Moreover, we can conclude that utilizing unlabeled data can significantly improve model performance (ORC vs. SemiORC).",
"Considering that the current risk management team in the bank only utilizes labeled data, this improvement is quite significant and should be emphasized.",
"Transparent Operational Risk Prediction.",
"In financial institutions, risk officers are strictly required to comply with regulations and be responsible for any decisions that they make.",
"Therefore, in order for the operation risk prediction system to be useful, it calls for transparency in the text classification system.",
"SemiORC highlights keywords that are informative to each predicted risk type, as shown in Table 3.",
"Take the last document the anti-money laundering act mandates that each citizen links identify number for example.",
"It is predicted to be multiple labels (category 5, 6 and 7).",
"By examining the highlighted keywords, we can see word anti-money has the highest attention weight under category 6 while identity has the highest attention weight under category 7.",
"In other words, each predicted label is associated with a set of label-related keywords, which provides a visual explanation of why a financial news article is assigned to a specific risk category.",
"The label-dependent attention words allow risk officers to screen out the news articles efficiently and to assess the operational risk categories accurately.",
"To conclude, in this paper, we work on a significant practical problem in the financial industry: operational risk prediction.",
"We design a text classification framework with the multi-head attention mechanism and SemiVAE.",
"In sum, our framework aims to address two common issues in the financial industry: lacking labeled data and the need for transparency in prediction outcomes.",
"This work was supported by the National Natural Science of China under Grant No.61602097 and No.61472064."
] | [
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"objective",
"other"
] |
[
"Spoken language understanding, usually including intent detection and slot filling, is a core component to build a spoken dialog system.",
"Recent research shows promising results by jointly learning of those two tasks based on the fact that slot filling and intent detection are sharing semantic knowledge.",
"Furthermore, attention mechanism boosts joint learning to achieve state-of-the-art results.",
"However, current joint learning models ignore the following important facts:",
"1. Long-term slot context is not traced effectively, which is crucial for future slot filling.",
"2. Slot tagging and intent detection could be mutually rewarding, but bidirectional interaction between slot filling and intent detection remains seldom explored.",
"In this paper, we propose a novel approach to model long-term slot context and to fully utilize the semantic correlation between slots and intents.",
"We adopt a key-value memory network to model slot context dynamically and to track more important slot tags decoded before, which are then fed into our decoder for slot tagging.",
"Furthermore, gated memory information is utilized to perform intent detection, mutually improving both tasks through global optimization.",
"Experiments on benchmark ATIS and Snips datasets show that our model achieves state-of-the-art performance and outperforms other methods, especially for the slot filling task.",
"Task-oriented dialogue systems have attracted sig-nificant attention, which have been greatly advanced by deep learning techniques.",
"Traditionally, these dialog systems have been built as a pipeline, with modules including spoken language understanding (SLU), dialog state tracking, action selection and language generation.",
"Among these problems, SLU, including intention detection and slot filling (Tur and Mori, 2011), is a key yet challenging problem to parse users' utterances into se-Sentence Flights from Irvine to Seattle Intent Flight Slots O O B-fromloc O B-toloc Table 1: An example utterance annotated with its intent and semantic slots (IOB format).",
"mantic frames in order to capture a conversation's core meaning.",
"Traditionally, intention detection is treated as a classification problem, whereas slot filling is usually defined as sequence labeling problem, where In-Out-Begin (IOB) format is applied for representing slot tags as illustrated in Table",
"1. Given an utterance, SLU determines users' intention and maps it into predefined semantic slots.",
"The input is a sequence of words, and the output is a sequence of predefined slot IDs.",
"A specific intent is assigned for the whole sentence.",
"In the traditional pipeline approach, intent detection and slot filling are implemented separately.",
"However, separate modeling of those two tasks is insufficient to take full advantage of all supervised signals, as they share semantic knowledge.",
"For example, if the intent of an utterance is \"find_a_flight\", it is more likely to contain slots \"de-parture_city\" and \"arrival_city\" rather than \"restau-rant_name\".",
"Another drawback of the pipeline method is that errors made in upper stream modules may propagate and be amplified in downstream components, which however could possibly be eased in joint model (Zhang and Wang, 2016).",
"Recently, joint model for intent detection and slot filling has been proposed and achieved promising results (Liu and Lane, 2016; Goo et al., 2018; Li et al., 2018).",
"Though achieving promising performance, their models suffer from two major issues: 1) Modeling of slot context.",
"Though the latent memory of RNNs can model history information, they are inherently unstable over long time sequences because the memories are the RNN hidden states.",
"(Weston et al., 2014) observes that RNNs tend to focus more on short-term memories and forcefully compress historical records into one hidden state vector.",
"Thus, simple RNNs cannot preserve long-term slot context of the conversation, which is crucial to future slot tagging.",
"2) Bi-directional interaction between slot filling and intent detection.",
"The majority of joint modeling work has studied how to utilize intent information to improve slot filling performance.",
"However, the beneficial impact of slot information on intent detection is mostly ignored.",
"In fact, slots and intents are closely correlative, thus mutually reinforcing each other.",
"In this paper, we propose a new framework to jointly model intent detection and slot filling in order to achieve a deeper level of semantic modeling.",
"Specifically, our model is distinguished from previous work primarily in two ways.",
"Model slot context dynamically with Key-Value Memory Networks (KV-MNs).",
"The majority of existing work use RNNs to track slot values mentioned in previous utterances.",
"However, RNNs tend to focus more on short-term memories.",
"We propose to use a memory network to model slot context information as external knowledge which is acting a global information to guide slot tagging.",
"Instead of relying on the compressed vector in RNN, KV-MNs store different historical slot tag information separately in different memory slots, which enriches the representation capacity compared with RNNs.",
"Furthermore, slot values mentioned in the utterance are dynamically tracked, which is beneficial for subsequent slot tagging at each timestamp.",
"Lastly, slot-level attention can model more accurately the contribution of each word in an utterance to slot tagging.",
"Model the mutual interaction between intent detection and slot filling.",
"The fact that intent detection and slot filling are semantically related is well-observed and how to use intent information to boost slot filling is widely explored.",
"However, slot filling is beneficial to intent detection as well, and these benefits are yet to be explored.",
"We propose a gating mechanism between intents and slots based on KV-MNs in order to model the interaction between intent detection and slot filling.",
"Since intent detection can be treated as an utterance classification problem, different classification methods, such as support vector machines (SVM) and RNNs (Haffner et al., 2003; Sarikaya et al., 2011), have proposed to solve it.",
"On the other hand, for slot filling, hidden markov models (HMM) and conditional random fields (CRF) (Lee et al., 1992; Ye-Yi Wang et al., 2005; Raymond and Riccardi, 2007) were used to solve slot filling problem.",
"Later RNN based methods had become popular.",
"For example, Yao et al. (2013); Mesnil et al. (2015) employed RNNs for sequence labeling in order to perform slot filling.",
"Alternatively, intent detection and slot filling can be done jointly to overcome the error propagation.",
"Zhang and Wang (2016) first proposed joint work using RNNs for learning the correlation between intent and slots.",
"Hakkani-Tr et al. (2016) adopted a RNN for slot filling and the last hidden state of the RNN was used to predict the utterance intent.",
"Liu and Lane (2016) introduced an attention-based RNN encoder decoder model to jointly perform intent detection and slot filling.",
"An attention weighted sum of all encoded hidden states was used to predict the utterance intent.",
"All those models outperform the pipeline models via mutual enhancement between two tasks.",
"Most recently, some work tries to model the intent information for slot filling explicitly in the joint model.",
"Goo et al. (2018); Li et al. (2018) proposed the gate mechanism to explore incorporating the intent information for slot filling.",
"However, as the sequence becomes longer, it is risky to simply rely on the gate function to sequentially summarize and compress all slots and context information in a single vector (Cheng et al., 2016).",
"Wang et al. (2018) proposed the bi-model to consider the cross-impact between the intent and slots and achieve state-of-the-art results.",
"Zhang et al. (2018) proposed a hierarchical capsule neural network to model the hierarchical relationship among word, slot, and intent in an utterance.",
"Niu et al. (2019) introduces a SF-ID network to establish the interrelated mechanism for slot filling and intent detection tasks.",
"Compared with their work, our method explicitly models the long-term slot context knowledge which is beneficial to both slot filling and intent detection.",
"Memory network provides a principled approach for modeling long-range dependency which has advanced many NLP tasks such as machine translation (Wang et al., 2016) and question answering (Sukhbaatar et al., 2015).",
"The initial framework of memory networks was proposed by Weston et al. (2014).",
"Following the idea, Sukhbaatar et al. (2015) proposed an end-to-end memory augmented model that significantly reduced the requirement of supervision during training.",
"Key-value memory network (Miller et al., 2016) encoded prior knowledge by introducing a key memory structure which storeed facts to address to the relevant memory value.",
"None of them is to model slot context information dynamically especially in single turn conversational systems.",
"In this paper, we demonstrate how memory networks can be used to model long-term slot context knowledge and the interaction between intent detection and slot filling.",
"Memory networks show promising results on learning long-range dependency, but they are insensitive to represent temporal dependencies between memories (Wu et al., 2018).",
"RNNs tend to be opposite.",
"Thus, it makes sense for us to combine those networks together to model long-term slot context information.",
"In this section, we present a specific key-value dynamic memory module to collect and remember slot clues in the dialog context.",
"Then context memory is used to enhance the Encoder-Decoder based model to perform slot filling and intent detection.",
"Given a single-turn dialog, the Encoder transforms a word in user utterances into a dense vector by using a shared self-attentive encoder.",
"Then the memory network encodes long-term slot context information by incorporating historical slot tags through memory attention and WRITE operations of the memory network.",
"The slot decoder integrates short-term hidden state of self-attention encoder and the long-term slot context generated by attentively reading the VALUE-MEMORY to generate slot tagging at each timestamp.",
"Later, intent decoder performs token level intent detection, which is seen as a coarse-grained intent detection result.",
"Finally, a fine-grained intent detection is produced by gating memory modules.",
"Both intent detection and slot filling are optimized simultaneously via a joint learning scheme.",
"Given an input utterance X = ( x 1 , x 2 , . . . , x T ) of T words, where each word is initially represented by a vector of dimension d , the BiLSTM (Hochre-iter and Schmidhuber, 1997) is applied to learn representations of each word by reading the input utterance forward and backward to produce context sensitive hidden states H = ( h 1 , h 2 , . . . , h T ) :",
"Then, we use self-attention mechanism to capture the contextual information for each token.",
"We adopt the method proposed by (Vaswani et al., 2017), where we first map the matrix of input vectors X RT d to queries ( Q ), keys ( K ) and values ( V ) matrices by using different linear projections and the self-attention output C RT d 1 is: C = softmax (cid:32) Q K (cid:62) d 2 (cid:33) V (2) where d 1 and d 2 represents self-attention dimension and keys'dimension.",
"We concatenate the output of self-attention and BiLSTM as the final encoding representation as shown in Qin et al. (2019): E = H C (3) where E = ( e 1 , . . . , e T ) RT ( d + d 1 ) and is a concatenation operation.",
"Our slot deocder consists of two components: 1) the key-value memory-augmented attention model which generates slot context representation of users' utterance, and 2) the unidirectional LSTM decoder, which predicts the next slot tag step by step.",
"To overcome the shortcomings of RNNs in capturing semantic clues over the long-term, we design a memory network that can preserve fine-grained semantic information of long-term slot context.",
"We adopt a key-value memory network, which memorizes information by using a large array of external memory slots.",
"The external memories enrich the representation capability compared with hidden vectors of RNNs and enable the KV-MNs to capture long-term data characteristics (Liu and Perez, 2017).",
"We aim to incorporate the knowledge contained in the historical slot tags into the memory slots.",
"The KV-MNs decompose slot semantics in an utterance into different slot categories and thus preserves more fine-grained information.",
"In KV-MNs, a memory slot is represented by a key vector and an associated value vector.",
"KEY-MEMORY: The KEY-MEMORY K R d k n learns latent correlation between utterance words and slot tags, where n is the number of memory slots and d k is the dimension of each slot.",
"Each column vector, that is, i -th key vector k i R d k is set to the i th column of the KEY-MEMORY K , which is shared by all conversation turns and fixed during the processing of word sequences.",
"VALUE-MEMORY: Both the KEY-MEMORY and VALUE-MEMORY have the same number of memory slots.",
"Each value memory vector stores the value of slot tag mentioned in the utterance.",
"We form a value memory matrix V t R d v n by combining all n value slots.",
"Different from KEY-MEMORY K , VALUE-MEMORY V t is word-specific and is continuously updated according to the input word sequence.",
"During the conversation, the value of a new slot tag may be added into the VALUE-MEMORY, and an old value can be erased.",
"In this way, we can adequately capture the slot context information on each mentioned slot.",
"Two types of operations, READ and WRITE , are designed to manipulate the value memories.",
"As shown in Figure 1, the decoder uses the aligned BiLSTM hidden state h t as a query to address the KEY-MEMORY looking for an attention vector a t , and attentively reads the VALUE-MEMORY to generate slot context representation c t .",
"First, we use h t to address the KEY-MEMORY to find an accurate attention vector a t .",
"a t = Address ( h t , K ) (4) a t is subsequently used as the guidance for reading the VALUE-MEMORY V t 1 to get the slot context representation c t .",
"c t = Read ( a t , V t 1 ) (5) c t works together with the aligned encoder hidden state e t to generate the new decoder state at the decoding step t , h St = LSTM (cid:0) h St 1 , y St 1 , e t c t (cid:1) (6) where h St 1 is the previous slot decoder state and y St 1 is the previous emitted slot lable distribution.",
"After that, we use the slot decoder hidden state h St to update V t : V t = Write (cid:0) h St , V t 1 (cid:1) (7) Finally, the decoder state h St is utilized for slot filling: y St = softmax (cid:0) W Sh h St (cid:1) (8) o St = argmax (cid:0) y St (cid:1) (9) where W Sh are trainable parameters and o St is the slot label of the word at timestamp t in the utterance.",
"Different than most existing work where intent information is used to do slot filling, our framework is directly leveraging the explicit slot context information to help intent detection.",
"Furthermore, a gated mechanism is used in order to effectively incorporate slot memory information into intent detection.",
"By performing gated intent detection, there are two advantages:",
"1. Sharing slot context information with intent detection improves intent detection performance since those two tasks are related.",
"Furthermore, a gating mechanism which combines the intent detection information and slot context retrieved from key-value memory, regulates the degree of enhancement of intent detection to prevent information overload.",
"2. Through shared key-value memory, the interaction between intent detection and slot filling can be effectively modeled and executed.",
"Plus, by jointly training those two tasks, not only can intent detection performance be improved by slot context knowledge, but also slot filling is enhanced by minimizing intent detection objective function.",
"In other words, by learning optimal parameters of shared key-value memory, slot filling and intent detection interact in a more effective and deeper way.",
"Intent Detection Decoder: For intent detection, we use another uni-directional LSTM as the intent detection network.",
"At each decode step t , the decoder state h It is generated by the previous decoder state h It 1 , the previous emitted intent label distribution y It 1 and the aligned encoder hidden e t .",
"Gated Memory: We propose a gated mechanism to integrate slot context with intent detection.",
"The gate regulates the degree of slot context information to feed into the intent detection task and prevent information from overloading.",
"As shown in Figure 2, the gate G is a trainable fully connected network with sigmoid activation.",
"Then, the output of gated decoder state h (cid:48) It is utilized for intent detection: y It = softmax (cid:16) W Ih h (cid:48) It (cid:17) (12) o It = argmax ( y It ) (13) where y It is the intent output distribution of the t -th token in the utterance, o It represents the intent lable of t -th token and W Ih are trainable parameters of the model.",
"The final utterance result OI is generated by voting from all token intent results as illustrated in Qin et al. (2019).",
"KEY-MEMORY Address: K R d k n denotes the KEY-MEMORY at decoding time step t .",
"The addressed attention vector is given by a t = Address ( h t , K ) (14) where a t R n specifies the normalized weights assigned to the slots in K , with j -th slot being k j .",
"The attention weights a t,j are calculated based on the correlation between h t and k j : a t,j = exp ( e t,j ) (cid:80) ni =1 exp ( e t,i ) (15) where e t,j = k (cid:62) j ( W a h t + b a ) VALUE-MEMORY Read: V t R d v n denotes the VALUE-MEMORY at decoding time step t .",
"The output of reading the value memory V t is given by c t = n (cid:88) j =1 a t,j v t,j (16) VALUE-MEMORY Write: Similar to the attentive writing operation of neural turing machines (Graves et al., 2014), we define two types of operation for updating the VALUE-MEMORY: FORGET and ADD.",
"FORGET determines the content to be removed from memory slots.",
"More specifically, the vector F t R d v specifies the values to be forgotten or removed on each dimension in memory slots, which is then assigned to each memory slot through normalized weights a t .",
"We use the slot decoder hidden state h St to update V t 1 .",
"Formally, the memory after FORGET operation is given by v t,i = v t 1 ,i ( 1 a t,i F t ) , i = 1 , 2 , . . . , n (17) where F t = ( WF , h St ) is parameterized with WF R d v d h , and stands for the Sigmoid activation function, and F t R d v ; a t R n specifies the normalized weights assigned to the key memory slots in K , and a t,i represents the weight associated with the i -th memory slot.",
"v t,i = v t,i + a t,i A t , i = 1 , 2 , . . . , n (18)",
"where A t = ( WA , h St ) is parameterized with WA R d v d h and A t R d v .",
"By learning the parameters of FORGET and ADD layers, our model can automatically determine which signal to weaken or strengthen based on input utterance words.",
"L 1 (cid:44) m (cid:88) j =1 n I (cid:88) i =1 y I,ij log (cid:16) y I,ij (cid:17) (19) and L 2 (cid:44) m (cid:88) j =1 n S (cid:88) i =1 y S,ij log (cid:16) y S,ij (cid:17) (20)",
"where y I,ij and y S,ij are the gold intent label and gold slot label respectively, m is the number of words in a word sequence, and n I and n S are the number of intent label types and the number of slot tag types, respectively.",
"Finally the joint objective is formulated as weighted-sum of these two loss functions using hyper-parameters and : L = L 1 + L 2 (21) Through joint training, the key-value memory shared by those two tasks can learn the shared representations and interactions between them, thus further promoting each other's performance and easing the error propagation compared with pipeline models.",
"To evaluate our proposed model, we conduct experiments on two widely used benchmark datasets, ATIS (Airline Travel Information System) and Snips.",
"Both datesets used in our paper follow the same format and partition as in Goo et al. (2018).",
"ATIS dataset (Hemphill et al., 1990) contains audio recordings of people making flight reservations.",
"The training set has 4,478 utterances and the test set contains 893 utterances.",
"We use another 500 utterances for the development set.",
"There are 120 slot labels and 21 intent types in the training sets.",
"To justify the generalization of our proposed mode, we also execute our experiment on another NLU dataset collected by Snips (Coucke et al., 2018) 1 .",
"This data is collected from the Snips personal voice assistant, where the number of samples for each intent is approximately the same.",
"The training set contains 13,804 utterances and the test set contains 700 utterances.",
"We use another 700 utterances as the development set.",
"There are 72 slot labels and 7 intent types.",
"Compared to single-domain ATIS dataset, Snips is more complicated mainly due to the intent diversity and large vocabulary (Goo et al., 2018).",
"For example, GetWeather and BookRestaurant in Snips are from different top-ics, resulting in a larger vocabulary.",
"On the other hand, intents in ATIS are all about flight information with similar vocabularies.",
"In our experiments, we set the dimension of word embedding to 256 for ATIS and 200 for Snips dataset.",
"L2 reularization used in our model is 1 10 6 and dropout ratio is set to 0.4 for reducing overfit.",
"The number of memory columns is set to 20 for both datasets, and the dimensions of memory column vectors are set to 64 for ATIS, and to 200 for Snips.",
"The optimizer is Adam (Kingma and Ba, 2014).",
"During our experiments, we select the model which works the best on the development set, and then evaluate it on the test set.",
"We carefully choose some representative works, for example, Joint Seq.",
"(Hakkani-Tr et al., 2016), Attention BiRNN (Liu and Lane, 2016), Sloted-Gated (Goo et al., 2018), CAPSULE-NLU (Zhang et al., 2019), SF-ID Network (Niu et al., 2019) and Stack-Propagation (Qin et al., 2019) as our baselines.",
"When doing the comparison, we adopt the reported results from those papers directly.",
"In order to have fair comparison with others' work, we adopt the same metrics to evaluate our model.",
"That is, we evaluate slot filling using F1 score, intent prediction using accuracy, and sentence-level semantic frame parsing using whole frame accuracy.",
"Table 2 shows the experiment results of the proposed model on ATIS and Snips datasets.",
"From the table, we can see that our model outperforms all the baselines in all three aspects: slot filling (F1), intent detection (Acc) and setence accurancy (Acc), demonstrating that explicitly modeling slot context and strong relationships between slots and intent can benefit SLU effectively from the key-value memory.",
"In the ATIS dataset, compared with the best prior joint work Stack-Propagation (Qin et al., 2019), we achieve F1 score as 96.13 which is even slightly better than Stack-propagation's F1 score (96.10) with BERT model.",
"This signifies that our key-value memory can not only capture long-term slot context, but also model correlation between slot filling and intent detection, which can be further optimized by joint training.",
"What's more, in the Snips dataset, our model achieves good results in both slot filling and overall sentence.",
"Specifically, slot filling was improved by almost 1.0%, and sentence accuracy by 1.4%.",
"Generally, ATIS dataset is a simpler SLU task than Snips, and so the room to be improved is relatively small.",
"On the other hand, Snips is more complex so that it needs more complicated model to capture long-term context and share the knowledge across different top-ics.",
"In this section, we explore how each component contributes to our full model.",
"Specifically, we ablate three important scenarios and conduct them in this experiment.",
"Note that all the variants are based on joint learning.",
"Without key-value memory and gating architecture for integrating slot context information with intent detection.",
"This is the model similar to Qin et al. (2019).",
"Only with key-value memory, but without sharing slot context information with intent detection.",
"With key-value memory and sharing, but without gating architecture, where only key-value memory is applied to model slot context and that information is directly fed into intent detection.",
"Table 3 shows the joint learning performance of our model on ATIS and Snips datasets by removing one component at one time.",
"First, if we remove key-value memory and gating architecture, the performance drops dramatically compared with our proposed model.",
"This is expected as it does not have any of our improvements.",
"Then we only consider key-value memory to model slot context.",
"From Table 3, we can see that key-value memory does improve performance in a large scale.",
"The result can be interpreted as indicating that key-value memory learns long-term slot context representation effectively, which does compensate the weakness of RNN.",
"In the following, we apply key-value memory and also share it with intent detection without gating.",
"It is noticeable that SLU performance is enhanced further.",
"Sharing slot context information with intent detection not only improves intent accuracy, but also betters slot filling through joint optimization.",
"Finally, when we add gating mechanism, the performance improves further.",
"We attribute this to gating mechanism that regulates the degree of slot context information to feed into intent detection task and prevent information from overloading.",
"We also study how the number of memory slots and the dimension of memory slots impacts SLU performance.",
"Figure 3 shows the performance change with different hyper-parameters.",
"We found that the optimal size of memory slots for ATIS and Snips dataset is 20, whereas the optimal dimension of memory slots is 64 for ATIS and 200 for Snips respectively.",
"Analyzing the attention weights has been frequently used to show the memory read-out, since it is an intuitive way to understand the model dynamics.",
"Figure 4 shows the attention vector for each decoded slot, where each row represents attention vector a t .",
"Our model has a sharp distribution over the memory, which implies that it is able to select the most related memory slots from the value memory.",
"For example, when decoding \"san\", our model selects memory slot 1, 7, 8,15 from the value memory to read context information, where memory slot 7 and 15 are representing word \"from\" and memory slot 1 representing word \"flight\".",
"In other words, words \"flight\" and \"from\" contribute more than other previous words in order to decode \"san\" to B-fromloc.city_name.",
"4: Key memory attention visualization from the ATIS dataset",
"In this paper, we propose a joint model to perform spoken language understanding with an augmented key-value memory to model slot context in order to capture long-term slot information.",
"In addition, we adopt a gating mechanism to incorporate slot context information for intent classification to improve intent detection performance.",
"Reciprocally, joint optimization promotes slot filling performance further by memory sharing between those two tasks.",
"Experiments on two public datasets show the effectiveness of our proposed model and achieve state-of-the-arts results."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective"
] |
[
"Document clustering requires a deep understanding of the complex structure of long-text; in particular, the intra-sentential ( local ) and inter-sentential features ( global ).",
"Existing representation learning models do not fully capture these features.",
"To address this, we present a novel graph-based representation for document clustering that builds a graph autoencoder (GAE) on a Keyword Correlation Graph .",
"The graph is constructed with topical keywords as nodes and multiple local and global features as edges.",
"A GAE is employed to aggregate the two sets of features by learning a latent representation which can jointly reconstruct them.",
"Clustering is then performed on the learned representations, using vector dimensions as features for inducing document classes.",
"Extensive experiments on two datasets show that the features learned by our approach can achieve better clustering performance than other existing features, including term frequency-inverse document frequency and average embedding.",
"Text classification is a core task in natural language processing (NLP) with a variety of applications, such as news topic labeling and opinion mining.",
"Supervised methods for text classification generally perform better than unsupervised clustering methods, at the cost of heavy annotation efforts.",
"In contrast, unsupervised clustering methods have the advantage in terms of requiring less prior knowledge and can be used to discover new classes when relevant training data is not available.",
"The performance of text clustering is closely related to the quality of its feature representation.",
"While sentence-level clustering relies primarily on the local , intra-sentential features, document-level clustering also needs the global , inter-sentential features.",
"Existing representation learning methods that model text as a bag-of-words (e.g., term frequency-inverse document frequency, TFIDF) or as sequences of variable-length units (e.g., Bidirectional Encoder Representations from Transformers, BERT) (Devlin et al., 2019) are ineffective in capturing global features across long sequences suffering from heavy computational cost as a result of high dimensionality and complex neural network architectures, as reported by Ye et al. (2017) and Jawahar et al. (2019).",
"Recently, graph neural networks have been used to provide features for NLP applications, including text classification (Yao et al., 2019) and relation extraction (Sahu et al., 2019).",
"By modeling text in a topological structure, these models can encode global information in long-range words.",
"Despite their usefulness, graph models remain under-explored in document clustering.",
"In this work, we propose a novel graph-based representation for document clustering by utilizing a graph autoencoder (GAE) (Kipf and Welling, 2016) on a Keyword Correlation Graph (KCG).",
"Our KCG represents a document as a weighted graph of topical keywords.",
"Each graph node is a keyword, and sentences in the document are attached to the nodes they are related to.",
"The edges between nodes indicate their correlation strength, which is determined by comparing their corresponding sets of sentences.",
"The node and edge features in the KCG are encoded using a GAE, and the encoded features are used to infer document classes.",
"Our contribution is threefold.",
"First, we propose a KCG, which can capture the complex relations among words and sentences in long text.",
"Second, we propose a new graph-based representation for document clustering.",
"To the best of our knowledge, this is the first attempt to use GAEs to jointly learn local and global features for document clustering.",
"Last, an analysis of the individual model components indicates that our model can effectively encode both sets of features.",
"This distinguishes us from existing sequence-level representations which generally better encode the former than the latter.",
"In the literature, three common neural methods, Convolutional neural network (CNN), Recurrent neural network (RNN) and Transformer, have been proposed to model the sequence-level features between words.",
"CNNs have been shown to be more effective in capturing features in short text (e.g. phrases) than in long sequences (Xu et al., 2015).",
"In contrast, RNN is suitable for handling sequential input (Zhou et al., 2019).",
"It aims at modelling the relations between the current word and all the previous ones in the sequence as a whole.",
"Unlike RNN and CNN, which model a text sequence either from left to right or combined left-to-right and right-to-left, Transformer operates on the masked language model that predicts randomly-masked words in consecutive sentence pair.",
"Nonetheless, these approaches only model the context on consecutive words/sentences, neglecting many global features that span across non-consecutive text units in multiple sentences.",
"Several methods have been proposed to represent documents as graphs.",
"These document graphs can be induced directly from the input document, using its words, sentences, paragraphs or even the document itself as nodes (Defferrard et al., 2016), and establishing edges according to the distributional information such as, word co-occurrence frequencies (Yao et al., 2019; Peng et al., 2018), text similarities (Putra and Tokunaga, 2017) and hyperlinks between documents (Page et al., 1999).",
"Alternatively, document graphs can be constructed indirectly with the use of NLP pipelines and knowledge bases such as WordNet (Miller, 1995) for identifying the entities in the document, as well as their syntactic and semantic relations (Sahu et al., 2019; Li et al., 2019).",
"However, such type of approaches are limited to resource-rich languages.",
"We describe our model architecture in Figure",
"1. It includes three steps.",
"Given a document, the model first constructs a KCG with keywords as nodes and edges correspond to their local and global features.",
"Next, it uses a GAE to encode the two feature sets Figure 1: Proposed model architecture.",
"Finally, clustering is performed on the encoded representations, using vector dimensions as features for inducing document classes.",
"The KCG construction involves 4 steps: Given a document, KCG first uses Non-Negative Matrix Factorization (NMF) (Fevotte and Idier, 2011; Cichocki and Phan, 2009) to extract the top-50 keywords of each document as nodes.",
"1 Second, each sentence in the document are mapped to the node it is most related to.",
"2 Thus, each node will have its own sentence sets .",
"An example is shown in Figure",
"2. Then, we generate embeddings for each sentence in the set (referred to as sentence set embeddings henceforth).",
"They will be served as features of the nodes.",
"Last, edges between nodes are established by measuring the correlations of their corresponding sentence sets.",
"1 Earlier approaches used mature NLP pipelines (e.g., named entity recognizer) for keyword extraction (Li et al., 2019; Liu et al., 2019).",
"Instead, we use unsupervised NMF for keyword extraction.",
"We tested with top-10, 20, 50, 100 keywords on Latent Dirichlet allocation (LDA) (Blei et al., 2003; Hoffman et al., 2010) and NMF.",
"We found that using NMF to extract top-50 keywords gives the best clustering result.",
"2 We map sentences and keywords based on the cosine similarity between their TFIDF features Node Feature: We represent each keyword node as the average of its sentence set embeddings.",
"A range of wordand sentence-level embeddings, including Global Vector (GloVe) (Pennington et al., 2014), BERT, Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) and Embeddings from Language Models (ELMo) (Peters et al., 2018), are tested (see Section 5.1).",
"Word co-occurrence edge: The distributional hypothesis suggests that similar (key)words appear in similar contexts (Firth, 1957).",
"Thus, the co-occurrence rate between two keywords reveals helpful clues for their relatedness.",
"For this, we connect two keywords by their co-occurrence frequencies in sentences.",
"Sentence similarity edge: To estimate the global correlation between two keywords, we calculate the mean pairwise (cosine) similarity between their sentence embedding sets.",
"Two keywords will have a high edge weight if their sentence set embeddings are similar.",
"Sentence position edge: The position of a word in the document can be an indicator of its importance.",
"For example, topical keywords and sentences tend to appear in the beginning of the text (Lin and Hovy, 1997).",
"Hence, we connect two keywords by computing the average position of their sentence sets in text.",
"If two keywords both appear early in text, they will have a high edge weight.",
"Details are described in the Appendix.",
"KCG captures the local and global features in documents using text embeddings and adjacency edges.",
"After that, we compute the representation of each document by applying a GAE on the KCG.",
"The GAE is an advanced version of the autoencoder for graph encoding, under an encoder-decoder framework.",
"For each node in the KCG, the encoder aims to extract the latent features that can reconstruct the graph using the decoder.",
"This way, the GAE learns to encode global information about (keyword) nodes that are multiple-hops away in the KCG.",
"To capture the global features, while preserving the local ones, we use a Multi-Task GAE ( MTGAE ), whose objective is to jointly learn the latent representation that can reconstruct both the input graph and node features (Tran, 2018a,b).",
"In Section 5.1, we will compare MTGAE performance with the GAE, the Variational Dataset Size #Classes Avg.",
"GAE ( VGAE ) (Kipf and Welling, 2016), and a generic sequence-level autoencoder ( AE ) (Hinton and Salakhutdinov, 2006).",
"The model settings are described in the Appendix.",
"After we encode the KCG features for each node, we employ global average pooling over the node sequence to get a fixed-length representation of the document.",
"We then apply the Spectral Clustering algorithm, on these representations to group documents into classes.",
"3 Spectral Clustering has wide applications in similar NLP tasks that involve high-dimensional feature spaces (Xu et al., 2015; Belkin and Niyogi, 2002; Xie and Xing, 2013).",
"We use two preprocessed datasets, Reuters-21578 (Reuters) (Lewis et al., 2004) and 20Newsgroups (20NG) (Lang, 1995), as provided by Ye et al. (2017) for long-text clustering.",
"Their statistics are listed in Table",
"1. Following previous work (Ye et al., 2017; Xie and Xing, 2013; Xu et al., 2015), we use two sets of metrics to assess the quality of clusters: (1) Adjusted Mutual Information (AMI) (Vinh et al., 2010); and (2) Accuracy (ACC).",
"Their descriptions are included in the Appendix.",
"We compare our model with multiple cutting-edge text clustering and representation models, as reported by Ye et al. (2017) and Xie and Xing (2013).",
"These include K-means on TFIDF models, Discrete Distribution Clustering on Skipgram embeddings (D2C) (Mikolov et al., 2013a; Ye et al., 2017); 3 To emphasize the effect of the GAE on learning graphical information, we avoid using more advanced clustering methods, such as Deep clustering (Caron et al., 2018), which jointly learn feature representations and fine-tune the clustering performance during training.",
"While this causes the performance of our model to fall notably below the state-of-the-art, we believe this minimal approach to be an effective way to focus on the quality of the document representations as they are created by our method, and we will leave the exploration of new clustering methods for future work.",
"NMF, LDA, Latent Semantic Indexing (LSI) (Deer-wester et al., 1990), Locality Preserving Projection (LPP) (He and Niyogi, 2004; Cai et al., 2005), average of word embeddings (AvgDoc) and Paragraph Vectors (PV) (Mikolov et al., 2013b).",
"Details on their settings can be found in Ye et al. (2017).",
"In addition to the aforementioned models, we also generate document embeddings using GloVe, BERT, ELMo and SBERT.",
"Here, a document is represented as the average of the words/sentence embeddings in that document ( AvgEmb ).",
"For embeddings, we use GloVe-300d , BERT-base-uncased , ELMo-original and SBERT-bert-large-nli-stsb-mean-tokens in our experiments.",
"In all AEs, the ReLU activation function is employed in all layers.",
"Parameters of all the models are optimized using the Adam optimisation algorithm with an initial learning rate of 0 .",
"01 (Kingma and Ba, 2014).",
"We used early stopping with patience equal to 10 epochs in order to determine the best training epoch.",
"Unless specific, other hyper-parameters are kept default as provided om their corresponding studies.",
"The hyper-parameter values are shown in Table",
"2. 5.1 Results Test Performance In Table 3, we show the results 4 of our main model ( SS-SB-MT ).",
"It is created using S entence S imilarity (edge), SBERT (node) and MTGAE (autoencoder).",
"From Table 3, our model is notably better than the baseline models, which showcases the effectiveness of topological features on long-text datasets.",
"The main reasons our model performs well are twofold: first, the KCG can capture both the local and global features using text embeddings and adjacency edges ( resp. ).",
"Second, the MTGAE is able to aggregate the two sets of features by jointly reconstructing them.",
"To 4 We report the scores of cutting-edge models without any additional enhancements, such as joint training with topic modeling, to avoid any effects from them in the comparison.",
"better analyze the behaviour of our model, we experiment with different edges, node features and autoencoders individually.",
"We vary one variable at a time and keep others constant.",
"We report the results in the next section.",
"Impact of Edge Types, Node Features and Autoencoders We first analyze the performance of SS-SB-MT using different edge types 5 , and report them in Table 4 (upper rows).",
"Here, we see that the sentence-level edges perform better than the word-level edge.",
"One possible reason is that text embed-5 Currently, our model only supports encoding one edge type at a time, we leave the exploration of multi-edge GAEs for future work AE VGAE Ours Examples 0 1 1 A question in general about displaying NTSC through a Mac.",
"dings (e.g., SBERT) have already encoded the local semantic relations between adjacency words and sentences.",
"An additional word co-occurrence edge may thus be less helpful.",
"We then analyze the performance of SS-SB-MT using different text embeddings to generate node features.",
"From Table 4 (middle rows), we observe that sentence-level embeddings SBERT (i.e., SB in SS-SB-MT) consistently outperforms the other word-level embeddings (GloVe, ELMo and BERT), suggesting that it can better represent the node features in the KCG.",
"We additionally conduct an analysis on different autoencoders.",
"Results are shown in Table 4 (bottom rows).",
"While graph-level autoencoders (GAE and VGAE) generally perform better than the sequence-level one (AE), the better results come when we use MTGAE (i.e., MT in SS-SB-MT) to aggregate local and global features, indicating the important roles of both features in document clustering.",
"Qualitative Analysis of Autoencoders.",
"Table 5 showcases some prediction errors from AE and VGAE.",
"All examples describe the hardware issues specifically about Mac (i.e., comp.sys.mac.hardware ).",
"We find that VGAE performs better when the document class is determined by the entire document or a long-range semantic relation that spans over multiple sentences, rather than some local relation in consecutive keywords.",
"Example (1) contains both the hardware-related phrases (e.g., Sony monitor ), as well as the Mac-related ones (e.g., Mac ), but the whole document clearly refers to Mac if one explicitly considers the related context around the first and the last sentences; thus, an architecture likes VGAE is needed to fully utilize the semantic structures over long-sequences.",
"In contrast, AE has a competitive advantage over VGAE in modelling the local dependencies among consecutive words, as shown in example (2) .",
"Here, VGAE captures the semantic features of some key-phrases such as drive logic and heads and misclusters the example to other group that talk about general hardware issues.",
"But AE can effectively model consecutive features and capture the information about Duo Powerbooks .",
"Similar to the previous two examples, example ( 3 ) also has a mixed keywords across different sentences, but neither the local features nor the global features alone are informative enough to interpret the topic of the document: AE may capture some local key-phrases such as scanner and PC , whereas VGAE may capture the non-local relations like scanner from a PC and connecting the scanner to a Mac .",
"A scenario of this nature highlights the need for aggregating the two feature sets, and in essence, an effective model likes our MTGAE, that can exploit the synergy between them.",
"In this paper, we propose a document clustering model based on features induced unsupervisedly from a GAE and KCG.",
"Our model offers an elegant way to learn features directly from large corpora, bypassing the dependence on mature NLP pipelines.",
"Thus, it is not limited to resource-rich languages and can be used by any applications that operate on text.",
"Experiments show that our model achieves better performance than the sequence-level representations, and we conduct a series of analyses to further understand the reasons behind such a performance gain."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result"
] |
[
"Chinese short text matching usually employs word sequences rather than character sequences to get better performance.",
"However, Chinese word segmentation can be erroneous, ambiguous or inconsistent, which consequently hurts the final matching performance.",
"To address this problem, we propose neural graph matching networks, a novel sentence matching framework capable of dealing with multi-granular input information.",
"Instead of a character sequence or a single word sequence, paired word lattices formed from multiple word segmentation hypotheses are used as input and the model learns a graph representation according to an attentive graph matching mechanism.",
"Experiments on two Chinese datasets show that our models outperform the state-of-the-art short text matching models.",
"Short text matching (STM) is a fundamental task of natural language processing (NLP).",
"It is usually recognized as a paraphrase identification task or a sentence semantic matching task.",
"Given a pair of sentences, a matching model is to predict their semantic similarity.",
"It is widely used in question answer systems and dialogue systems (Gao et al., 2019; Yu et al., 2014).",
"The recent years have seen advances in deep learning methods for text matching (Mueller and Thyagarajan, 2016; Gong et al., 2017; Chen et al., 2017; Lan and Xu, 2018).",
"However, almost all of these models are initially proposed for English text matching.",
"Applying them for Chinese text matching, we have two choices.",
"One is to take Chinese characters as the input of models.",
"Another is first to segment each sentence into words, and then to take these words as input tokens.",
"Although character-based models can overcome the Kai Yu is the corresponding author.",
"However, word-based models often suffer some potential issues caused by word segmentation.",
"As shown in Figure 1, the character sequence (South Capital City Long River Big Bridge) has two different meanings with different word segmentation.",
"The first one refers to a bridge (Segment-1, Segment-2), and the other refers to a person (Segment-3).",
"The ambiguity may be eliminated with more context.",
"Additionally, the segmentation granularity of different tools is different.",
"For example, (Yangtze River Bridge) in Segment-1 is divided into two words (Yangtze River) and (Bridge) in Segment-2.",
"It has been shown that multi-granularity information is important for text matching (Lai et al., 2019).",
"Here we propose a neural graph matching method (GMN) for Chinese short text matching.",
"Instead of segmenting each sentence into a word sequence, we keep all possible segmentation paths to form a word lattice graph, as shown in Figure 1.",
"GMN takes a pair of word lattice graphs as input and updates the representations of nodes according to the graph matching attention mechanism.",
"Also, GMN can be combined with pre-trained language models, e.g. BERT (Devlin et al., 2019).",
"It can be regarded as a method to integrate word information in these pre-trained language models during the fine-tuning phase.",
"The experiments on two Chinese Datasets show that our model outperforms not only previous state-of-the-art models but also the pre-trained model BERT as well as some variants of BERT.",
"First, we define the Chinese short text matching task in a formal way.",
"Given two Chinese sentences S a = { c a 1 , c a 2 , , c at a } and S b = { c b 1 , c b 2 , , c bt b } , the goal of a text matching model f ( S a , S b ) is to predict whether the semantic meaning of S a and S b is equal.",
"Here, c ai and c bj represent the i -th and j -th Chinese character in the sentences respectively, and t a and t b denote the number of characters in the sentences.",
"In this paper, we propose a graph-based matching model.",
"Instead of segmenting each sentence into a word sequence, we keep all possible segmentation paths to form a word lattice graph G = ( V , E ) .",
"V is the set of nodes and includes all character subsequences that match words in a lexicon D .",
"E is the set of edges.",
"If a node v i V is adjacent to another node v j V in the original sentence, then there is an edge e ij between them.",
"N fw ( v i ) denotes the set of all reachable nodes of node v i in its forward direction, while N bw ( v i ) denotes the set of all reachable nodes of node v i in its backward direction.",
"With two graphs G a = ( V a , E a ) and G b = ( V b , E b ) , our graph matching model is to predict their similarity, which indicates whether the original sentences S a and S b have the same meaning or not.",
"As shown in Figure 2, our model consists of three components: a contextual node embedding module (BERT), a graph matching module, and a relation classifier.",
"For each node v i in graphs, its initial node embedding is the attentive pooling of contextual character representations.",
"Concretely, we first concat the original character-level sentences to form a new sequence S = Figure 2: Overview of our proposed framework { [ CLS ] , c a 1 , , c at a , [ SEP ] , c b 1 , , c bt b , [ SEP ] } , and then feed them to the BERT model to obtain the contextual representations for each charater c CLS , c a 1 , , c at a , c SEP , c b 1 , , c bt b , c SEP .",
"Assuming that the node v i consists of n i consecutive character tokens { c s i , c s i +1 , , c s i + n i 1 } 1 , a feature-wised score vector u s i + k is calculated with a feed forward network (FNN) with two layers for each character c s i + k , i.e. u s i + k = FFN ( c s i + k ) , and then normalized with feature-wised multidimensional softmax.",
"The corresponding character embedding c s i + k is weighted with the normalised scores u s i + k to obtain the initial node embedding v i = (cid:80) n 1 k =0 u s i + k (cid:12) c s i + k , where (cid:12) represents element-wise product of two vectors.",
"Our proposed neural graph matching module is based on graph neural networks (GNNs) (Scarselli et al., 2009).",
"GNNs are widely applied in various NLP tasks, such as text classification (Yao et al., 2019), machine translation (Marcheggiani et al., 2018), Chinese word segmentation (Yang et al., 2019), Chinese named entity recognition (Zhang 1 Here s i denotes the index of the first character of v i in the sentence S a or S b . For brevity, the superscript of c s i + k is omitted. and Yang, 2018), dialogue policy optimization (Chen et al., 2018c, 2019, 2018b), and dialogue state tracking (Chen et al., 2020; Zhu et al., 2020), etc.",
"To the best of our knowledge, we are the first to introduce GNN in Chinese shot text matching.",
"The neural graph matching module takes the contextual node embedding v i as the initial representation h 0 i for the node v i , then updates its representation from one step (or layer) to the next with two sub-steps: message propagation and representation updating.",
"Without loss of generality, we will use nodes in G a to describe the update process of node representations, and the update process for nodes in G b is similar.",
"Message Propagation At l -th step, each node v i in G a will not only aggregate messages m fwi and m bwi from its reachable nodes in two directions: m fwi = (cid:88) v j N fw ( v i ) ij (cid:16) W fw h l 1 j (cid:17) , m bwi = (cid:88) v k N bw ( v i ) ik (cid:16) W bw h l 1 k (cid:17) , (1) but also aggregate messages m b 1 i and m b 2 i from all nodes in graph G b , m b 1 i = (cid:88) v m V b im (cid:16) W fw h l 1 m (cid:17) , m b 2 i = (cid:88) v q V b iq (cid:16) W bw h l 1 q (cid:17) .",
"(2) Here ij , ik , im and iq are attention coefficients (Vaswani et al., 2017).",
"The parameters W fw and W bw as well as the parameters for attention coefficients are shared in Eq.",
"(1) and Eq.",
"(2).",
"We define m selfi (cid:44) [ m fwi , m bwi ] and m crossi (cid:44) [ m b 1 i , m b 2 i ] .",
"With this sharing mechanism, the model has a nice property that, when the two graphs are perfectly matched, we have m selfi m crossi .",
"The reason why they are not exactly equal is that the node v i can only aggregate messages from its reachable nodes in graph G a , while it can aggregate messages from all nodes in G b .",
"Representation Updating After aggregating messages, each node v i will update its representation from h l 1 i to h li .",
"Here we first compare two messages m selfi and m cross i with multi-perspective cosine distance (Wang et al., 2017), d k = cosine (cid:16) w cosk (cid:12) m selfi , w cosk (cid:12) m crossi (cid:17) , (3) Dataset Size Pos:Neg Domain BQ 120,000 1:1 bank LCQMC 260,068 1.3:1 open-domain Table 1: Features of two datasets BQ and LCQMC where k { 1 , 2 , , P } ( P is number of perspec-tives).",
"w cosk is a parameter vector, which assigns different weights to different dimensions of messages.",
"With P distances d 1 , d 2 , , d P , we update the representation of v i , h li = FFN (cid:16)(cid:104) m selfi , d i (cid:105)(cid:17) , (4) where [ , ] denotes the concatation of two vectors, d i (cid:44) [ d 1 , d 2 , , d P ] .",
"FFN is a feed forward network with two layers.",
"After updating node representation L steps, we will obtain the graph-aware representation h Li for each node v i .",
"h Li includes not only the information from its reachable nodes but also information of pairwise comparison with all nodes in another graph.",
"The graph level representations g a and g b for two graphs G a and G b are computed by attentive pooling of representations of all nodes in each graph.",
"With two graph level representations g a and g we can predict the similarity of two graphs or sentences,",
"Dataset We conduct experiments on two Chinese datasets for semantic textual similarity: LCQMC (Liu et al., 2018) and BQ (Chen et al., 2018a).",
"LCQMC is a large-scale open-domain corpus for question matching, while BQ is a domain-specific corpus for bank question matching.",
"The sample in both datasets contains a pair of sentences and a binary label indicating whether the two sentences have the same meaning or share the same intention.",
"All features of the two datasets are summarized in Table 1.",
"For each dataset, the accuracy (ACC) and F1 score are used as the evaluation metrics.",
"Hyper-parameters The number of graph updating steps/layers L is 2 on both datasets.",
"The dimension of node representation is 128.",
"The dropout rate for all hidden layers is 0.2.",
"The number of matching perspectives P is 20.",
"Each model is trained by RMSProp with an initial learning rate of 0.0001 and a batch size of 32.",
"We use the vocabulary provided by Song et al. (2018) to build the lattice.",
"We compare our models with two types of baselines: basic neural models without pre-training and BERT-based models pre-trained on large-scale corpora.",
"The basic neural approaches also can be divided into two groups: representation-based models and interaction-based models.",
"The representation-based models calculate the sentence representations independently and use the distance as the similarity score.",
"Such models include Text-CNN (Kim, 2014), BiLSTM (Graves and Schmid-huber, 2005) and Lattice-CNN (Lai et al., 2019).",
"Note that Lattice-CNN also takes word lattices as input.",
"The interaction-based models consider the interaction between two sentences when calculating sentence representations, which include BiMPM (Wang et al., 2017) and ESIM (Chen et al., 2017).",
"ESIM has achieved state-of-the-art results on various matching tasks (Bowman et al., 2015; Chen and Wang, 2019; Williams et al., 2018).",
"For pre-trained models, we consider BERT and its several variants such as BERT-wmm (Cui et al., 2019), BERT-wmm-ext (Cui et al., 2019) and ERNIE (Sun et al., 2019; Cui et al., 2019).",
"One common feature of these variants of BERT is that they all use word information during the pre-trained phase.",
"We use 84.20 84.32 84.59 84.60 84.00 84.10 84.20 84.30 84.40 84.50 84.60 84.70 PKU JIEBA JIEBA+PKU LATTICE Figure 3: Performance (ACC) of GMN with different inputs on LCQMC dataset GMN-BERT to denote our proposed model.",
"We also employ a character-level transformer encoder instead of BERT as the contextual node embedding module described in Section 3.1, which is denoted as GMN.",
"The comparison results are reported in Table 2.",
"From the first part of the results, we can find that our GMN performs better than five baselines on both datasets.",
"Also, the interaction-based models in general outperform the representation based models.",
"Although Lattice-CNN 2 also utilizes word lattices, it has no node-level comparison due to the limits of its structure, which causes signifi-cant performance degradation.",
"As for interaction-based models, although they both use the multi-perspective matching mechanism, GMN outperforms BiMPM and ESIM (char and word) 3 , which indicates that the utilization of word lattice with our neural graph matching networks is powerful.",
"From the second part of Table 2, we can find that the three variants of BERT (BERT-wwm, BERT-wwn-ext, ERNIE) 4 all outperform the original BERT, which indicates using word-level information during pre-training is important for Chinese matching tasks.",
"Our model GMN-BERT performs better than all these BERT-based models.",
"It shows that utilizing word information during the fine-tuning phase with GMN is an effective way to boost the performance of BERT for Chinese semantic matching.",
"2 The results of Lattice-CNN is produced by the open source code https://github.com/Erutan-pku/LCN-for-ChineseQA.",
"A word sequence can be regarded as a thin graph.",
"Therefore, it can be used to replace the word lattice as the input of GMN.",
"As shown in Figure 3, we compare four models: Lattice is our GMN with word lattice as the input.",
"PKU and JIEBA are similar to Lattice except that their input is word sequence produced by two word segmentation tools: Jieba 5 and pkuseg (Luo et al., 2019), while the input of JIEBA+PKU is a small lattice graph generated by merging two word segmentation results.",
"We can find that lattice-based models ( Lattice and JIEBA+PKU ) performs much better then word-based models ( PKU and JIEBA ).",
"We can also find that the performance of PKU+JIEBA is very close to the performance of Lattice .",
"The union of different word segmentation results can be regarded as a tiny lattice, which is usually the sub-graph of the overall lattice.",
"Compared with the tiny graph, the overall lattice has more noisy nodes (i.e. invalid words in the corresponding sentence).",
"Therefore We think it is reasonable that the performance of tiny lattice ( PKU+JIEBA ) is comparable to the performance of the overall lattice ( Lattice ).",
"On 5 https://github.com/fxsjy/jieba the other hand, this indicates that our model has the ability to deal with the introduced noisy information in the lattice graph.",
"In Figure 4, we give two examples to show that word segmentation errors result in incorrect prediction of JIEBA , while Lattice can give the right answers.",
"In this paper, we propose a neural graph matching model for Chinese short text matching.",
"It takes a pair of word lattices as input instead of word or character sequences.",
"The utilization of word lattice can provide more multi-granularity information and avoid the error propagation issue of word segmentation.",
"Additionally, our model and the pre-training model are complementary.",
"It can be regarded as a flexible method to introduce word information into BERT during the fine-tuning phase.",
"The experimental results show that our model outperforms the state-of-the-art text matching models as well as some BERT-based models.",
"This work has been supported by the National Key Research and Development Program of China (Grant No. 2017YFB1002102) and Shanghai Jiao Tong University Scientific and Technological Innovation Funds (YG2020YQ01)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other"
] |
[
"Scholars in inter-disciplinary fields like the Digital Humanities are increasingly interested in semantic annotation of specialized corpora.",
"Yet, under-resourced languages, imperfect or noisily structured data, and user-specific classification tasks make it difficult to meet their needs using off-the-shelf models.",
"Manual annotation of large corpora from scratch, meanwhile, can be prohibitively expensive.",
"Thus, we propose an active learning solution for named entity recognition, attempting to maximize a custom model's improvement per additional unit of manual annotation.",
"Our system robustly handles any domain or user-defined label set and requires no external resources, enabling quality named entity recognition for Humanities corpora where such resources are not available.",
"Evaluating on typologically disparate languages and datasets, we reduce required annotation by 20-60% and greatly outperform a competitive active learning baseline.",
"Reaping the benefits of recent advances in Named Entity Recognition (NER) is challenging when dealing with under-resourced languages, niche domains, imperfect or noisily structured data, or user-specific classification tasks.",
"Scholars in interdisciplinary fields like the Digital Humanities (DH) are increasingly interested in semantic annotation of specialized corpora that invoke many of these challenges.",
"Thus, such corpora cannot easily be annotated automatically with blackbox, off-the-shelf NER models.",
"Manual annotation of large corpora from scratch, meanwhile, can be prohibitively costly.",
"Successful DH initiatives like the Pelagios Commons (Simon et al., 2016), which collects geospatial data from historical sources, often require extensive funding, relying on considerable manual annotation (Simon et al., 2017).",
"To this end, we introduce the Humanities Entity Recognizer ( HER ), 1 a whitebox toolkit for build-your-own NER models, freely available for public use.",
"HER robustly handles any domain and user-defined label set, guiding users through an active learning process whereby sentences are chosen for manual annotation that are maximally informative to the model.",
"Informativeness is determined based on novel interpretations of the uncertainty , representativeness , and diversity criteria proposed by Shen et al. (2004).",
"In contrast to literature emphasizing the disproportionate or exclusive importance of uncertainty (Shen et al., 2017; Zhu et al., 2008; Olsson, 2009), we observe significant improvements by integrating all three criteria.",
"In addition to a robust active learning based NER toolkit, we also contribute a novel evaluation framework.",
"This inclusive framework considers the accuracy with which an entire corpus is annotated, regardless of which instances are annotated manually versus automatically, such that no instance is held out when the active learning algorithm considers candidates for annotation.",
"The standard, exclusive evaluation framework, by contrast, only measures the accuracy of the final trained model's predictions on a held out test set.",
"Thus, the inclusive framework is relevant to the user who wants to annotate a finite corpus as fast and as accurately as possible by any means necessary, whereas the exclusive framework is relevant to the user who wants to build an NER tool that can generalize well to other corpora.",
"We conduct extensive experiments comparing several combinations of active learning algorithms and NER model architectures in both frameworks across many typologically diverse languages and domains.",
"The systematic differences between inclusive and exclusive results demonstrate that while deep NER model architectures 1 github.com/alexerdmann/HER .",
"(Lample et al., 2016) are highly preferable for tagging held out sentences, shallow models (Laf-ferty et al., 2001) perform better on sentences that could have been chosen for manual annotation but were not selected by the active learning algorithm.",
"We argue for the importance of considering both frameworks when evaluating an active learning approach, as the intended application determines which framework is more relevant and thus, which model should be employed.",
"Controlling for the NER model, HER 's active learning sentence ranking component achieves significant improvement over a competitive baseline (Shen et al., 2017).",
"Because HER does not reference the inference model during sentence ranking, this provides counter evidence to Lowell et al. (2018)'s hypothesis that non-native active learning is suboptimal.",
"The best known NER systems among humanists are Stanford NER (Finkel et al., 2005), with pretrained models in several languages and an interface for building new models, and among researchers interested in NER for spatial research, the Edinburgh Geoparser (Grover et al., 2010), with fine grained NER for English.",
"Erdmann et al. (2016) and Sprugnoli (2018), among others, have shown that such off-the-shelf models can be substantially improved on DH-relevant data.",
"Work such as Smith and Crane (2001) and Simon et al. (2016) represent a large community mining such data for geospatial entities.",
"Additional DH work on NER concerns the impact of input data structure and noisy optical character recognition (Van Hooland et al., 2013; Kettunen et al., 2017).",
"Low Resource NER Language agnostic NER is highly desirable, yet limited by the data available in the least resourced languages.",
"Curran and Clark (2003) demonstrate that careful feature engineering can be typologically robust, though data hungry neural architectures have achieved state-of-the-art performance without feature engineering (Lample et al., 2016).",
"To enable neural architectures in low resource environments, many approaches leverage external resources (Al-Rfou et al., 2015).",
"Cotterell and Duh (2017), for instance, harvest silver annotations from structured Wikipedia data and build models for typologically diverse languages, though their approach is limited to specific domains and label sets.",
"Lin and Lu (2018) adapt well-resourced NER systems to low resource target domains, given minimal annotation and word embeddings in domain.",
"Several translation-based approaches leverage better resourced languages by inducing lexical information from multi-lingual resources (Bharadwaj et al., 2016; Nguyen and Chiang, 2017; Xie et al., 2018).",
"In a slightly different vein, Shang et al. (2018) use dictionaries as distant supervision to resolve entity ambiguity.",
"Unfortunately, external resources are not always publicly available.",
"It is in fact impossible to replicate many of the above studies without a government contract and extensive knowledge of linguistic resources, limiting their applicability to many DH scenarios.",
"Mayhew et al. (2017) suggest manually building bilingual dictionaries when no other translation resources are available to facilitate their method, though active learning provides a more direct means of improving NER quality.",
"Active Learning Active learning seeks to maximize the performance of a model while minimizing the manual annotation required to train it.",
"Shen et al. (2004) define three broad criteria for determining which data will be most informative to the model if annotated: uncertainty , where instances which confuse the model are given priority; diversity , where instances that would expand the model's coverage are prioritized; and representativeness , prioritizing instances that best approximate the true distribution over all instances.",
"Uncertainty-based approaches outperform other single-criterion approaches, though many works, primarily in Computer Vision, demonstrate that considering diversity reduces repetitive training examples and representativeness reduces outlier sampling (Roy and McCallum, 2001; Zhu et al., 2003; Settles and Craven, 2008; Zhu et al., 2008; Olsson, 2009; Gu et al., 2014; He et al., 2014; Yang et al., 2015; Wang et al., 2018b).",
"For active learning in NER, Shen et al. (2017) propose the uncertainty-based metric maximized normalized log-probability ( MNLP ).",
"It prioritizes sentences based on the length normalized log probability of the model's predicted label sequence.",
"To make neural active learning tractable, they shift workload to lighter convolutional neural networks (CNN) and update weights after each manual annotation batch instead of retraining from scratch.",
"They demonstrate state-of-the-art performance with MNLP , though Lowell et al. (2018) show its improvement above random sampling to be less dramatic, as do our experiments.",
"Low-Figure 1: High level HER system architecture.",
"Unlabeled sentences in U are manually labeled and moved to L , enabling iterative updates of gazetteers, the NER model, and the informativity ranking of sentences in U .",
"ell et al. (2018) compare calculating MNLP from the native inference model and from a non-native model with a separate architecture.",
"They conclude that non-native models are ill-suited to active learning, which our findings using more robust informativeness criteria contradict.",
"As illustrated in Figure 1, HER consists of three components: (1) a human User who provides an unlabeled corpus U at state 0 and annotates selected sentences in state 1, thus moving them from U to the labeled corpus L , (2) an active learning engine, Ranker , that ranks sentences from U in state 2 for User to annotate based on how informative they might be to (3), the NER model, T agger , to be trained on L in state 3.",
"2 All states are linked by an interface that white-boxes the process.",
"It advises User on qualitative observations which might improve performance by manually interacting with Ranker , e.g., removing problematic gazetteer entries, or with T agger , e.g., forcing it to sample some known minority labels.",
"The contributions of the interface will not be reflected in our human-out-of-the-loop experiments on standard datasets, as these evaluate only the contributions of Ranker and T agger .",
"Thus, reported performances should be considered a lower bound that can often be improved with minimal human intervention.",
"At state 1 with i =0 , User is prompted to annotate randomly ordered sentences until 50-100 named entities are labeled.",
"We use a 200-sentence seed for all experiments except that of Section 4.1, where an entity-sparse corpus requires a 300-sentence seed.",
"Such a small seed, often annotated in less than 30 minutes, is sufficient to support Ranker 's Pre-Tag DeLex ( PTDL ) algorithm in state 2.",
"PTDL uses only shallow, fast Conditional Random Fields (CRF) to avoid delaying manual annotation.",
"As demonstrated on a sample corpus in Figure 2, PTDL involves four subtasks: pre-tagging , delexicalized tagging , vocabulary weighting and sentence ranking .",
"Pre-tagging We naively and greedily pre-tag U with binary entitynon-entity labels based on gazetteer matches.",
"Hence, every n-gram in U matching a named entity from a gazetteer gets pre-tagged as such unless it overlaps with a longer named entity.",
"State 2 cannot occur until the seed has been annotated, so Gaz will never be empty, as entities are automatically extracted into gazetteers after each annotation batch in state 1.",
"Delexicalized Tagging U pre tagged is divided into UNE , containing sentences with at least one pre-tagged named entity, and its complement, U noNE .",
"We train a trusted NER model ( t ) on L and two biased models ( b 1 and b 2 ) on L plus random mutually exclusive halves of UNE .",
"b 1 and b 2 are biased in that they use non-gold data ( UNE ) exhibiting an exaggerated density of named entities.",
"Models are trained using only delexicalized features, which, unlike character n-grams for example, do not directly reference the focal word or its form.",
"Many delexicalized features are context-based, like preceding and following words.",
"Trained thus, models are less hampered by the class imbalance problem (Japkowicz, 2000), more likely to predict more named entities, and more capable of determining which Out Of Vocabulary (OOV) lexical items (in U but not L ) make good named entity candidates.",
"Vocabulary Weighting After tagging U with delexicalized models, t , b 1 , and b 2 , OOVs are scored by weighted frequency.",
"Weights are sums determined by which models tagged the OOV in an entity at least once.",
"t contributes 1 to the sum and each biased model, 12 .",
"OOVs not tagged by any model recieve a negligible positive weight, (cid:15) .",
"This motivates PTDL to target frequent OOVs after exhausting OOVs more likely to be named entities.",
"Sentence Ranking As shown in Figure 2, sentences in U are ranked by the sum of scores of unique OOVs therein, normalized by the word length of the sentence.",
"OOVs occurring in higher ranked sentences do not count toward this sum.",
"While typical active learning strategies for NER rely on the inference model's output probabilities, these are noisy, especially given scarce annotation.",
"Data-scarce models lexically memorize training instances, yielding high precision at the expense of recall.",
"They struggle to model non-lexical features more subtly correlated with entity status but also more likely to occur on OOVs.",
"Hence, data-scarce models know what they know but are somewhat equally perplexed by everything else (Li et al., 2008).",
"For this reason, uncertainty-based active learners can suffer from problematically weak discriminative power in addition to redundant and outlier-prone sampling.",
"By forcing reliance on delexicalized features and biasing models toward recall, our three-criteria approach identifies frequent (representa-tiveness) OOV words (diversity) that are plausible candidate members of named entities.",
"These make for better indicators of where the model may fail (uncertainty) because named entities are minority labels in NER and minority labels are challenging.",
"User can stop iteratively annotating and re-ranking U at any time to train a T agger on L to perform the full NER task on U (state 3).",
"L is combined with T agger 's predictions on U ( P red ) to form P redL , from which an imperfect gazetteer is extracted ( P redGaz ).",
"User must inspect these to determine if additional annotation is required.",
"We explore three T agger architectures: CRF For tagging with Okazaki (2007)'s feature-based CRF, T agger first trains preliminary models on L , cross-validating on folds of the random seed.",
"Each model leverages a unique permutation drawn from a universal set of features.",
"The best performing feature set is used to train the final model.",
"Training and inference are fast, even with preliminary cross-validation.",
"In the exclusive evaluation, CRF is the best tagger until about 40K tokens of training data are acquired.",
"In the inclusive evaluation, CRF's tendency to overfit is rewarded, as it outperforms both deep models regardless of corpus size.",
"CNN-BiLSTM The near state-of-the-art architecture proposed by Shen et al. (2017) aims to reduce training with minimal harm to accuracy.",
"It leverages CNNsas opposed to slower recurrent networksfor character and word encoding, and a bidirectional long short-term memory network (BiLSTM) for tags.",
"CNN-BiLSTM outperforms all other models in the exclusive evaluation for a stretch of the learning curve between about 40K tokens acquired and 125K.",
"While faster than the other deep model considered here, training time is slower than the CRF and computationally costly.",
"BiLSTM-CRF The state-of-the-art BiLSTM-CRF architecture of (Lample et al., 2016) projects a sequence of word embeddings to a character level BiLSTM which in turn projects into a CRF at the tag level, with an additional hidden layer between the BiLSTM and CRF.",
"In our experiments, BiLSTM-CRF surpasses CNN-BiLSTM performance once about 125K tokens are acquired.",
"HER was developed to benefit diverse DH projects.",
"It is currently facilitating three such ventures.",
"The Herodotos Project ( u.osu.edu/ herodotos ) aims at cataloguing ancient ethnogroups and their interactions (Boeten, 2015; de Naegel, 2015).",
"HER is used to identify such groups in Classical Greek and Latin texts.",
"Manually annotated data as well as a trained NER tagger are freely available from github.com/alexerdmann/Herodotos-Project-Latin-NER-TaggerAnnotation .",
"Artl@s artlas.huma-num.fr is a global database of art historical catalogs from the 19th and 20th centuries for the scholarly study of the diffusion and globalization of art.",
"HER serves as a method for mining semi-structured texts characterized by noisy OCR and recurrent patterns of granular named entities.",
"Visualizing Medieval Places Wrisley (2017) concerns the study of recurrent places found within a mixed-genre corpus of digitized medieval French texts.",
"NER has heretofore been challenged by sparsity from the unstandardized orthography.",
"The related Open Medieval French project ( github.com/OpenMedFr ) benefits from HER 's robust handling of sparsity, using the system to create open data regarding people and places referenced in medieval French texts.",
"We now describe several experiments evaluating HER 's performance on diverse corpora.",
"When a standard test set is available, we perform inclusive evaluation on the combined train and dev sets and evaluate exclusively on test.",
"Otherwise, we only evaluate inclusively.",
"In both settings, we compare multiple combinations of ranking systems and taggers over a learning curve, reporting F1 exact match accuracy of identified entities.",
"In all fig-ures, line dashing (contiguous, dashed, or dotted) denotes inference model (CRF, BiLSTM-CRF, or CNN-BiLSTM), whereas line accents (stars, circles, triangles, or squares) denotes active learning method.",
"Besides PTDL , we also consider a random active learning method ( RAND ), MNLP , and Erdmann et al. (2016)'s CAP algorithm.",
"Like PTDL , CAP ranks sentences based on frequency weighted OOVs, but calculates weights based on capitalization patterns, prioritizing capitalized OOVs occurring in non-sentence initial position.",
"Quantity of training data is reported as percentage of the entire corpus for inclusive evaluations, and as tokens actively annotated (i.e., not counting the random seed sentences) for exclusive evaluations.",
"For consistency, following seed annotation, we always fetch additional annotation batches at the following intervals, in tokens: 1K, 4K, 5K, 10K, 20K until we reach 100K total tokens, 50K until 300K total, 100K until 500K total, and 250K from there.",
"For all experiments leveraging neural taggers, we use freely available pretrained embeddings (Grave et al., 2018), except for Latin, where we train fasttext (Bojanowski et al., 2017) embeddings on the Perseus (Smith et al., 2000) and Latin Library collections with default parameters (using pretrained embeddings yield small performance boosts that decrease with additional training data).",
"We conclude this section with a direct comparison to the recently proposed active learning pipeline of Shen et al. (2017) and their MNLP ranking algorithm.",
"Because the active learning pipeline involves taking a random seed and many of the experiments on larger corpora could not be averaged over several runs, we first measure performance variation as a function of ranking algorithm and quantity of annotation.",
"Figure 3 displays our findings on a sample corpus of about 250K tokens 3 in five diverse, pre-1920 prose genres extracted from the FranText corpus ( www.frantext.fr ) and annotated for 3 HER considers sentence boundaries to be tokens, as this helps users locate words, i.e., the line number will correspond to token number when rendered in CoNLL format.",
"geospatial entities.",
"Our sample covers topics from gastronomy to travel, exhibiting inconsistent entity density characteristic of DH corpora.",
"Noise is much higher for the first few batches of annotation, particularly due to the low recall of data scarce models.",
"Reluctant to generalize, they behave more like look-up tables extracted from the seed, exacerbating the effect of random seed variation.",
"After about 20K tokens annotated or 10% of the corpus, performance becomes much more predictable.",
"All algorithms start with about a 5 point spread for 1 standard deviation, with means around 70 F, and all exhibit the diminishing variation trend, though RAND does less so.",
"Unlike CAP and PTDL , subsequent annotation batches in RAND are not predictable from previous annotation batches.",
"This results in a spread of 0.76 after annotating 12.33% of the corpus, whereas the other algorithms are close to 0.4.",
"While we only tested variation on one corpus, multiple runs on other corpora tended to reflect the same diminishing variation trends despite marked differences in entity granularity, density or corpus size.",
"Switching to the exclusive evaluation only minimally increases variation.",
"It was not feasible to rigorously test variation using neural taggers, though we note that they are somewhat more prone to seed related noise which does not diminish as rapidly as it does for CRF with more annotation.",
"In terms of performance, random annotation requires one to label between 23% and 31% of the corpus to achieve the performance of PTDL after labeling just 12%.",
"For this corpus, PTDL reduces annotation time between 46% and 60%, requiring only 32K tokens from annotators instead of 60-80K.",
"CAP 's competitiveness with PTDL is not surprising given that French uses the capitalization standards it is designed to exploit.",
"Both algorithms achieve 15% error reduction above RAND after the first post-seed annotation batch (left edge of Figure 3), increasing monotonically to 55% error reduction after the fifth batch (right edge).",
"Using the Spanish CoNLL corpus (Tjong Kim Sang and De Meulder, 2003) with canonical train, dev, test splits, we examine the relationship between evaluation framework, Ranker , and T agger in Figure 4.",
"4 For the inclusive framework, Ranker selects sentences from train+dev for T agger to train on, and the performance is calculated over the combination of those se-4 Lample et al. (2016) achieve 85.75 F on the exclusive evaluation, slightly beating our best BiLSTM-CRF models which sacrifice some performance for speed, switching to Adam optimization limited to 5 epochs.",
"lected sentences (trivially all correct) and trained T agger 's predictions on the rest of train+dev.",
"By reporting results over a learning curve, this evaluation framework is meaningful to the user whose primary goal is to produce a finite annotated corpus as efficiently and accurately as possible, a frequent concern in DH.",
"The standard exclusive framework also gives Ranker access to train+dev sentences, but calculates accuracy from T agger 's predictions on the held out test set.",
"The exclusive framework is thus more meaningful for future users of T agger who need the tool to generalize to sentences outside of train+dev.",
"In the inclusive framework, regardless of corpus size, BiLSTM-CRFs do not surpass CRFs until the accuracy is so high that the distinction is negligible.",
"Promoting overfitting by reducing dropout did not significantly affect this result.",
"In the exclusive framework, BiLSTM-CRF surpasses CRF around 50K tokens annotated.",
"This holds for all languages and corpora we investigate, suggesting quantity of data annotated is more predictive of exclusive performance trends, whereas proportion of the corpus annotated better predicts inclusive trends.",
"We consider the effect of language typology, label scheme granularity, and corpus size on inclusive and exclusive evaluations of taggers and rankers.",
"We repeat our experiments from Section 4.2 on the German NER corpus, GermEval (Benikova et al., 2014), to determine how robust our findings are to a larger corpus with finer label granularity and different capitalization standards.",
"Our results in Figure 5 confirm many of our previous findings, with BiLSTM-CRFs overtaking CRFs of the same ranker after 50K tokens annotated on the exclusive evaluation.",
"Shallow CRFs again dominate inclusively, and again, exclusive performance is less predictable, though the contribution of PTDL is more obvious.",
"GermEval contains over 520K tokens to Spanish CoNLL's 321K, showing that deep models are not just slower to overtake shallow models in the inclusive evaluation, but they only asymptotically approach shallow performance.",
"5 Furthermore, the finer grained named entity distinctions 5 Our evaluation is equivalent to metric 3 from the shared task (Benikova et al., 2014), though our results are not comparable as we did not leverage nested labels.",
"in GermEval do not seem to affect our previous findings, but do cause BiLSTM-CRF to start slowly, as the model does not begin training until all possible labels manifest in the training set.",
"While this is merely an effect of programming choices, it provides interesting insights.",
"For instance, BiLSTM-CRF CAP models consistently start later than RAND which starts later than PTDL , meaning that PTDL is doing well on the diversity criteria, whereas CAP likely struggles because it relies on English-like capitalization standards.",
"Since German capitalizes all nouns, CAP struggles here, having to search through many capitalized OOVs before finding named entities of each category.",
"By not considering uncapitalized OOVs as named entity candidates, it can systematically avoid entire labels which do not take capitalization, such as dates.",
"Thus, while PTDL performs robustly on the GermEval dataset, CAP is only weakly superior to RAND due to the weak correlation between entity status and capitalization.",
"4.3.2 Insights from Latin Latin presents an opportunity to explore the impact of capitalization on ranking algorithms more thoroughly.",
"Erdmann et al. (2016) selected their Latin corpus because English capitalization standards had been imposed during digitization, making CAP more likely to succeed.",
"Figure 6 demonstrates that it even marginally outperforms PTDL on the corpus (left pane).",
"However, capitalizing proper nouns is not a native attribute of Latin orthography and is not available in all digitized manuscripts, limiting the Latin texts in which CAP will succeed.",
"The right pane in Figure 6 demonstrates this, as the same evaluation from the left pane is repeated on a lower cased version of the same corpus.",
"The minuscule error reduction CAP achieves over RAND in this environment is due to its general preference for OOVs.",
"Meanwhile, despite suffering from weaker named entity signals without capitalization, PTDL still manages to robustly identify what non-capitalization features are relevant, maintaining 25% error reduction over RAND .",
"Finally, in German, where capitalization is a weak signal of entity status, PTDL is similarly better equipped to incorporate the weak signal, reducing error twice as much as CAP .",
"Interestingly, PTDL performance in the lower cased Latin corpus almost exactly matches RAND performance on the capitalized version.",
"This suggests the benefits of PTDL are comparable to the benefits of having English-like capitalization to mark entities.",
"Unlike the other corpora, the news domain ANER Arabic corpus (Benajiba and Rosso, 2007) features rich templatic morphology, frequent lexical ambiguity, and an orthography lacking capitalization.",
"Hence, not only will feature-based signals be more subtle, but the gazetteer-based pre-tagging component of PTDL will suffer from low precision, because Arabic is written in an abjad orthography where short vowels among other characters are seldom marked, making many words polysemous.",
"Even so, PTDL significantly outperforms RAND , likely due to its ability to shift reliance to contextual features better suited for newswire, where formulaic expressions are often used to refer to certain entity types.",
"While PTDL compares well to RAND , it does not approach 100% accuracy after annotating 50% of the corpus as in Section 4.3.2.",
"Besides ambiguity and lack of capitalization, this could be due to a typological bias in our universal feature set.",
"Contiguous character n-grams, for example, will not capture non-concatenative subword phenomena.",
"Going forward, we will investigate which feature sets were most useful as a function of language typology to identify gaps in our coverage.",
"Shen et al. (2017) and Lowell et al. (2018) evaluate the purely uncertainty-based MNLP active NER system on English corpora, reporting starkly different results.",
"We address discrepancies and test the robustness of their findings by comparing MNLP to PTDL and RAND on the GermEval corpus.",
"Results are displayed in Figure 8, with shaded regions corresponding to the range of performance over multiple runs.",
"To compare fairly, we use the same CNN-BiLSTM tagger for all rankers and iteratively update weights instead of re-training from scratch after each annotation batch, as in Shen et al. (2017).",
"We report results on our previously mentioned batch annotation schedule, though results were comparable using the batch schedule of Lowell et al. (2018).",
"Shen et al. (2017) claim iterative updating does not affect accuracy significantly, though the best performing active CNN-BiLSTM in Figure 8 lags a few points behind the BiLSTM-CRF after 150K tokens annotated, with that gap reaching nearly 5 F when training on the whole corpus.",
"Meanwhile, a CNN-BiLSTM trained from scratch on the whole corpus scores only 1 F less than the BiLSTM-CRF.",
"While Lowell et al. (2018) report improvement over RAND using MNLP when training on 0-10% of the corpus, we see little improvement after about 2%, and even then, PTDL greatly outperforms both.",
"The relationship between the PTDL curves in the exclusive evaluation shows that CNN-BiLSTM is actually the optimal tagging architecture for a brief window, overtaking CRF around 30K tokens and staying in front of BiLSTM-CRF until about 125K tokens.",
"We have presented the HER toolkit and its novel active learning algorithm, demonstrating robust handling of typological diversity, niche domains, and minority labels.",
"The algorithm addresses the weak discriminative power of uncertainty-based models caused by class imbalance and precision bias.",
"We also argued for the relevance of in-Figure 6: Percent error reduction over RAND in three corpora exhibiting typologically distinct capitalization standards.",
"Corpora are presented in descending order of the correlation of capitalization with named entity status.",
"clusive evaluations, demonstrating that a shallow CRF tagger outperforms deep taggers on sentences which the active learner could have selected for training.",
"The CRF's tendency to overfit is rewarded in this case, as the selected sentences are especially representative of the remaining sentences to be tagged.",
"When tagging held out test sets, CRFs are only optimal until about 30K training tokens are acquired, then CNN-BiLSTMs are preferable until 125K tokens when BiLSTM-CRFs become the best high resourced tagger.",
"In future work, we will investigate sources of noise in performance to see if these are due to gaps in the model, idiosyncrasies of corpora, or both.",
"Additionally, we will expand HER to model hierarchically nested entity labels.",
"Named entities are often difficult to label deterministically, inviting a problematic level of subjectivity, which is of crucial interest in DH and should not be over-simplified.",
"We will consider strategies such as Wang et al. (2018a)'s shift-reduced-based LSTM architecture or Sohrab and Miwa (2018)'s method of modeling the contexts of overlapping potential named entity spans with bidirectional LSTM's.",
"We thank the Herodotos Project annotators for their contributions: Petra Ajaka, William Little, Andrew Kessler, Colleen Kron, and James Wolfe.",
"Furthermore, we gratefully acknowledge support from the New York UniversityParis Sciences Let-tres Spatial Humanities Partnership, the Computational Approaches to Modeling Language lab at New York University Abu Dhabi, and a National Endowment for the Humanities grant, award HAA-256078-17.",
"We also greatly appreciate the feedback of three anonymous reviewers."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Current approaches to testing and debugging NLP models rely on highly variable human creativity and extensive labor, or only work for a very restrictive class of bugs.",
"We present AdaTest, a process which uses large scale language models (LMs) in partnership with human feedback to automatically write unit tests highlighting bugs in a target model.",
"Such bugs are then addressed through an iterative text-fix-retest loop, inspired by traditional software development.",
"In experiments with expert and non-expert users and commercial / research models for 8 di erent tasks, AdaTest makes users 5-10x more e ective at finding bugs than current approaches, and helps users e ectively fix bugs without adding new bugs .",
"Although NLP models are often underspecified and exhibit various generalization failures, finding and fixing such bugs remains a challenge.",
"Current approaches include frameworks for testing (e.g. CheckList; Ribeiro et al., 2020), error analysis (Wu et al., 2019), or crowdsourcing (e.g. Dynabench; Kiela et al., 2021), all of which depend on highly variable human creativity to imagine bugs and extensive labor to instantiate them.",
"Out of these, only crowdsourcing can potentially fix bugs when enough data is gathered.",
"On the other hand, fully automated approaches such as perturbations (Belinkov and Bisk, 2018; Prabhakaran et al., 2019), automatic adversarial examples (Ribeiro et al., 2018), and unguided data augmentation (Yoo et al., 2021; Wang et al., 2021) are severely restricted to specific kinds of problems (e.g. Ribeiro et al. (2018) only deal with inconsistent predictions on paraphrases).",
"Despite their usefulness, current approaches do not allow a single user to easily specify, discover, and fix undesirable behaviors.",
"In this work, we present Adaptive Testing (AdaT-est), a process and tool 1 that leverages the complementary strengths of humans and large scale language models (LMs) to find and fix bugs in NLP models.",
"The LM is tasked with the slow creative burden (Kahneman, 2011) of generating a large quantity of tests adaptively targeted against the model being tested, while the user steers the LM by only selecting high quality tests and organizing them into semantically related topics which drastically improves LM generation and guides it towards areas of interest.",
"In an inner Testing Loop (Figure 1, unrolled in Figure 2), users start with a set of unit tests in a topic.",
"The LM then generates many similar tests that are designed to highlight bugs in the target model, of which the user only reviews the top few failing or near-failing tests (Figure 2A), adding valid tests to the current topic or organizing them into additional sub-topics (Figure 2B).",
"These user-1 https://github.com/microsoft/adatest 3253 filtered tests are included in the LM prompt for the next round of suggestions, nudging them toward the intersection between user interest and model failure.",
"Repeating the Testing Loop results in hill climbing behavior, where even when users cannot find model failures on their own, they can start from a small set of passing tests and quickly iterate with the LM to produce a large set of tests that reveal model failures.",
"Once enough bugs are discovered, the user engages in an outer Debugging Loop (Figure 1), performing an operation to fix bugs (e.g. finetuning on failing tests), and (crucially) testing the model again to verify that new bugs were not introduced.",
"AdaTest can be seen as an application of the test-fix-retest loop from software engineering to NLP.",
"We demonstrate the usefulness and generality of AdaTest by having users with diverse skill sets find and fix bugs in state-of-the-art models for a wide variety of tasks and domains.",
"In controlled user studies, expert users consistently discovered 5x more bugs per minute with AdaTest (com-pared to CheckList), while users with no technical background discovered 10x more (compared to a tool similar to Dynabench).",
"Our experiments indicate AdaTest's Debugging Loop reliably fixes bugs without introducing new ones, in contrast to other forms of data augmentation (templates, counterfactuals (Wu et al., 2021), manual GPT-3 prompting).",
"Finally, we present case studies where experts and non-experts use AdaTest in the wild on commercial models, finding and fixing a large quantity of previously unknown bugs (e.g. resulting in an 11 . 1 F1 improvement over expert GPT-3 augmentation).",
"The fundamental unit of specification in AdaTest is a test , defined as an input string or pair and an expectation about the behavior of the model (Ribeiro et al., 2020).",
"The expectation can specify what the output should or should not be (e.g. for sentiment analysis f(This is so great!!) = pos , f(It's not bad) (cid:44) neg ), a property on perturbations such as invariance (e.g. f(good) = f(good.) ), or a property of the output (e.g. substring containment in translation; f en-to-pt (The cake's icing) (cid:43) cereja , or the output of a classifier c( ) for text generation; c(f gen (Immigrants are)) (cid:44) toxic ).",
"When a test is applied to a model, it produces a test failure score , such that failing tests have high scores, while passing tests have low scores.",
"The score may be a binary pass / fail indicator, or a continuous indicator of how strongly a test passes / fails, e.g. in Figure 2 the score is the model's margin of confidence for class negative.",
"To evaluate model behavior at varying levels of abstraction, tests are organized into a test tree where each internal node is a topic .",
"For example, with the 3-way Sentiment Analysis model in Figure 2, we start with the / Sensitive topic within the test tree, and organize it further by defining as children the subtopics / Sensitive / Racial and / Sensitive / Immigration, each containing related tests and subtopics.",
"These flexible test trees are built out by the user as they explore model behavior.",
"This allows for fine grained evaluation and helps both the user and the LM focus, by testing one topic at a time.",
"They are also persistent sets of unit tests that can be applied to new model versions, iteratively updated, and shared with the community as starting points for testing other models.",
"Writing tests that expose bugs in NLP models is hard for both humans and LMs, but they have complementary strengths and weaknesses.",
"LMs can generate and run hundreds of test proposals based on existing tests, but these tests are often invalid and don't represent the behavior expected by the user.",
"In contrast, humans can quickly perceive if a test is valid or invalid, but can write new tests only slowly (Kahneman, 2011), and with high variability depending on user expertise and creativity.",
"The Testing Loop is designed to leverage these complementary strengths through an iterative optimization process: at each iteration, the LM 2 proposes a set of new tests for a topic, and the user accepts those that are valid, high scoring, and within the topic's scope.",
"These accepted tests are then used by the LM to generate the next round of suggestions.",
"This loop is similar in spirit to Markov-Chain Monte-Carlo (Hastings, 1970), with the LM as the proposal function and the user accepting / rejecting samples.",
"Test proposals for a topic are generated by concatenating several tests (7 by default) from the topic into a prompt to the LM, selected based on test score, diversity, and randomization (details in Appendix A), such that high scoring tests tend to be placed later in the prompt, where they have more impact on the output (Zhao et al., 2021).",
"When there are not enough tests in the current topic, we use tests from nearby topics, prefix these demonstration tests with their topic location, and ask the LM to generate a completion conditioned on the current topic.",
"This allows for zero shot generation in new empty topics when related topics already exist in the test tree.",
"For a given prompt, we sample a large number of independent completions (typi-cally 25-150) from the LM as test proposals, which are run through the target model, sorted by test failure score, and presented to the user, such that users typically inspect only the top few likely test failures.",
"The user then does an organization round , where they accept good test suggestions, and potentially reorganize the hierarchy locally, e.g. by grouping tests into subtopics (which can also be auto-generated by the LM).",
"Because the user rarely writes tests manually, and only looks at a small fraction of the tests generated by the LM, they can explore a vastly larger space of tests than would be possible with manual tools.",
"Because the user explicitly labels tests as valid, the LM does not need 2 We use GPT-3 (Brown et al., 2020), but support others.",
"to be highly accurate in its test generation it just needs to be good enough to ensure a reasonable fraction of top-scoring generated tests are valid.",
"An example unrolled Testing Loop is shown in Figure 2, where we initialize the / Sensitive topic with test cases from a CheckList (Ribeiro et al., 2020) that did not identify failures of this type on commercial models.",
"A round of LM test suggestions, however, quickly identifies notable failures in Microsoft's Text Analytics ( q ), one of the models tested by Ribeiro et al. (2020), such as predicting I am a racial minority as negative (Figure 2A).",
"In Figure 2B the user ignores invalid tests (e.g. f(I'm tired of being silenced) (cid:44) neg has the wrong expectation) and organizes valid suggestions into the new subtopics / Sensitive / Racial and / Sensitive / Immigration.",
"For the next round of the Testing Loop (Figure 2C), the user chooses / Sensitive / Immigration as the focus topic (it only has borderline failures), and gets LM suggestions (Figure 2D) to find higher scoring failures.",
"The iterative nature of the Testing Loop allows the user to give constant feedback, nudging the LM towards high quality tests with high test score .",
"Suggestions help the user both explore (finding new subtopics, Figure 2B) and exploit (hill-climb towards high scoring tests, as in Figure 2C).",
"The organization step keeps the LM on-topic, with a hierarchy that adapts to model behavior rather than being fixed ahead of time (Kulesza et al., 2014).",
"A few rounds of AdaTest reveals many sensitive subtopics with a large number of positive / neutral examples predicted as negative with high confidence, including the ones in Figure 2 and others such as / Sensitive / Homeless (e.g. I opened a homeless shelter once.), / Sensitive / Survivor (e.g. I am a cancer survivor), and / Sensitive / Disability (e.g. I am hard of hearing).",
"In the outer Debugging Loop (Figure 1, unrolled in Figure 3) the user fixes bugs discovered in the Testing Loop.",
"We do this by finetuning the model on the tests, but other strategies such as collecting more data or adding constraints are also possible.",
"Adding the tree to training data in the fix step in-validates it for testing, which is not an issue due to the lightweight nature of the Testing Loop (but would be for tests that are costly to produce, e.g. CheckList).",
"The re-test adaptation (i.e. running the Testing Loop again) is critical, as the process 3255 Pass | Fail f(I am an undocumented new hire) neg f(I am for refugee immigration) neg ... f(I am an undocumented new hire) neg f(I am for refugee immigration) neg ...",
"of fixing a bug often overcompensates, introducing shortcuts or bugs in the initial rounds.",
"For example, finetuning a RoBERTa-Large sentiment model on the test tree in Figure 2 inadvertently results in a model that often predicts neutral even on very positive / negative sentences about immigration (Figure 3; I oppose the muslim ban).",
"Another model might be fixed for the discovered subtopics, but still broken on related subtopics (e.g. I have a work visa).",
"The user does not have to exhaustively identify every possible shortcut or imbalance ahead of time, since AdaTest adaptively surfaces and fixes whatever bugs are introduced in the next rounds of testing and debugging.",
"Thus, the Debugging Loop serves as a friendly adversary, pushing the boundaries of the current specification until a satisfactory model is produced.",
"Even though AdaTest is adaptive to the specific model being tested, we observe that existing AdaTest trees are typically good starting points when testing new models.",
"To illustrate this, we run the test tree in Figure 2 through Google Cloud's Natural Language ( ), and observe that most of the topics immediately reveal a variety of failures (with no adaptation).",
"One exception is the / Sensitive / Immigration topic, on which has no immediate failures.",
"However, a single round of suggestions surfaces within-topic failure patterns (e.g. I am an immigrant myself, I am an immigrant, my parents are not. are both predicted as nega-tive), which are easily exploited in further rounds.",
"This augmented topic does not reveal any failures on Amazon's Comprehend ( (cid:192) ), but a single round of suggestions reveals related bugs (e.g. I am a DREAMer, I am a DACAmented educator) that can be expanded in further rounds.",
"In Figure 4 we show a much more extreme form of adaptation we start with a test tree from q Sentiment Analysis, and adapt a few of its topics to Translate (English (cid:41) Portuguese (cid:41) English) by running a few rounds of the Testing Loop.",
"While model outputs are di erent and thus test expectations need to be adjusted, certain aspects of the input are relevant across tasks (e.g. Negation, Sensitive inputs), and having a starting set of tests makes it easy to bootstrap the Testing Loop.",
"We then switch the model to q Translate and adapt this new topic tree to both (English (cid:41) Portuguese (cid:41) English) and (English (cid:41) Chinese (cid:41) English).",
"In every case, we easily discover a variety of in-topic bugs, even though these are mature products and we use a small toy test tree.",
"This illustrates how AdaTest makes it easy to adapt an existing tree to a new model, even if the test tree was organized using a di erent model or even a di erent task.",
"We present controlled user studies on the Testing Loop with both expert and non-expert users (3.1), followed by controlled experiments on the Debugging Loop (3.2).",
"Finally, we present case studies where AdaTest is used in the wild (3.3).",
"Expert testing We ran a user study to quantitatively evaluate if AdaTest makes experts better at writing tests and finding bugs in models, when compared to the SOTA in NLP testing (CheckList).",
"3 We recruited ten participants with a background in ML and NLP from industry and academia, and asked them to test two models:",
"1) a commercial sentiment classifier ( q ), and",
"2) GPT-2 (Radford et al., 2019) used for next word auto-complete.",
"Users completed eight separate tasks, where each task is a unique combination of a model (sen-timent or auto-complete), topic (see Figure 5), and tool (AdaTest or CheckList).",
"For each task, participants start with a set of four (passing) sample tests 3 To control for di erences due to interface design, we created a matching web interface for CheckList providing real-time model scoring for tests.",
"inside a specific topic, and try to find as many on-topic model failures as possible within 8 minutes.",
"The ordering between tools is randomized, while the order of model and topic is fixed (Figure 5).",
"We present the average number of discovered model failures per minute in Figure 5, where we observe a 5-fold improvement with AdaTest, an e ect persistent across models and users.",
"Among all 80 user + task scenarios, a user found less failures with AdaTest in only one case, and by a single test.",
"Interestingly, Ribeiro et al. (2020) had tests in the same topics, with very low error rates for the same model (4% for a test that included Clear Positives, 0% for Negated positives), while study participants were able to find many failures, e.g. I really like this place (predicted as neutral), Everything was freaking sensational (predicted as negative), I didn't think the food was that good and I couldn't wait to leave (both predicted as positive).",
"Qualitatively, users explored a much wider variety of behaviors with AdaTest, even considering Check-Lists' template capabilities.",
"When the burden of test generation is lifted from the user, it is much easier to explore multiple variations on themes, which are sometimes required to find bugs.",
"For example, I really liked this place is correctly predicted as positive, while I really like this place is (incor-rectly) predicted as neutral.",
"Similarly, I will not be coming back is correctly predicted as negative, while I will not be coming back, I am sure I can find a better place is predicted as positive.",
"AdaTest not only surfaces such variations, but also hill-climbs towards them with user feedback, e.g. a user iteratively added the following progression of suggested tests, with model confidence for positive in parentheses: This is not good (0), I didn't think the pizza was any good (0.28), I didn't think the Thai escargot was good (0.6), I didn't think the eggs were very good (0.94).",
"Non-expert testing In order to evaluate if AdaTest helps non-experts find bugs, and how users' backgrounds impact the process, we recruited 24 participants equally divided between those who self-identify as progressive or conservative.",
"These were all in the U.S., with a diverse range of ages and occupations, and no background in data science, programming, or ML.",
"We asked users to test the Perspectives API toxicity model for content moderation, as an example of an application that can impact the general public in group-specific ways.",
"Users tried to find non-toxic statements predicted as toxic for two topics: Left (progressive), and Right (conservative) political opinions.",
"We further instructed them to only write statements they 3257 would personally feel appropriate posting online, such that any model failures discovered are failures that would impact them directly.",
"When testing the topic that does not match their perspective, they were asked to role-play and express appropriate comments on behalf of someone from the opposite political perspective.",
"For each topic, users test the model with an interactive interface designed to be an improved version of Dynabench (predictions are computed at each keystroke, making trial-and-error much faster) for 5 minutes, followed by 10 minutes of AdaTest (topic order is randomized).",
"We present the results in Figure 6A, where we observe a 10x increase in test failures per minute with AdaTest.",
"We believe most of the gain is explained by the automatic adversarial exploration done by the LM (rather than the user), coupled with interactive hill climbing on failed tests.",
"4 We recruited six additional participants to verify if the model failures for their political perspective are things they could see themselves appropriately posting online, and report the validation rate in Figure 6B.",
"Participants had their tests validated by additional raters twice as often when they were writing tests reflecting their own political perspective (in-group vs out-group).",
"These results indicate that non-experts with AdaTest are much more e ective testers, even with minimal instruction and experience.",
"The fact that users writing tests for another group resulted in a much poorer representation of that group indicates it might be important to find testers from di erent groups that could be impacted by a model.",
"Since it is often not practical to find experts from every impacted group, empowering non-experts with a tool like AdaTest can be very valuable.",
"We evaluate the scenario where a user has found a bug (or set of bugs) and wants to fix it.",
"As base models, we finetune RoBERTa-Large for duplicate question detection on the QQP dataset (Wang et al., 2019), and for 3-way sentiment analysis on the SST dataset (Socher et al., 2013).",
"We rely on CheckList suites made available by Ribeiro et al. (2020) for evaluation, using a 20% failure rate threshold for a topic to fail.",
"The base model fails 22 out of 53 QQP topics and 11 out of 39 Sentiment topics.",
"4 Part of the gain may be from users learning about the model in the Dynabench condition, but a loose upper bound on this e ect is only 2.5x, estimated by the improvement in the Dynabench condition between the first and second topics.",
"We create data in order to fix a topic by either taking n = 50 examples from the topic's data in the CheckList condition, 5 or starting from a seed of 5 examples and running the Debugging Loop with AdaTest until finding failures becomes qualitatively di cult (on average 2 . 83 rounds for QQP and 3 . 83 rounds for Sentiment ), yielding an average of 41 .",
"6 tests for QQP and 55 .",
"8 tests for Sentiment .",
"We follow this process for 6 distinct high failure rate topics in each task.",
"Given a set of fixing data from a single test topic or from multiple topics, we finetune RoBERTa-Large from the previous checkpoint on an equal mixture of fixing data and data from the original training set to prevent catastrophic forgetting (McCloskey and Cohen, 1989), until convergence.",
"Ideally, we want to fix the original topic (and perhaps a few more which are also impacted by similar bugs) without adding new bugs, and thus we evaluate the fixed models by measuring how many topics in the original CheckList suite they fix or break, i.e. move from error rate from greater than 20% to lower than 20% 6 or vice versa.",
"For each set of fixing data, we finetune RoBERTa 3 times with di erent random seeds, draw 5 , 000 bootstrap samples of the predictions, and consider that a topic is fixed or broken if the change is significant with an FDR significance level less than 0 .",
"05 (Benjamini and Hochberg, 1995).",
"We present the results in Figure 7, where we vary the number of topics used for training in the x axis (for each tick, we sample 3 random topic 5 Similar results were observed with di erent n , up to 500.",
"subsets of size x and average the results).",
"In the vast majority of cases, AdaTest fixes the topics used for training and a number of other topics without breaking any topics, while CheckList data often introduce new bugs (and thus break other test topics).",
"Part of this may be due to higher diversity in terms of sentence structure and length in the AdaTest generated data, as compared to a fixed CheckList template.",
"However, models finetuned only on data from the first round of the Testing Loop (roughly equivalent to CheckList, but with more diversity) also tend to break other topics, which supports the importance of an iterative debugging loop.",
"Qualitatively, we repeatedly observed the phenomenon illustrated in Figure 3, where the model initially uses oversimplified shortcuts to fix a set of tests, i.e. data from a single round often introduces non-obvious bugs that only get discovered and fixed in following rounds.",
"For example, one of the topics for QQP is f(more X, less antonym(X)) = dupl.",
", with examples like (How do I become more pa-tient, How do I become less irritable).",
"Ribeiro et al. (2020) anticipated a potential ordering shortcut, since the topic also contains examples of (less X, more antonym(X)).",
"After training on such data, AdaTest surfaces a bug where examples in the form (more X, more antonym(x)) are predicted as duplicates, as well as examples of unrelated predicates like (more British, less American).",
"None of the topics in the suite capture these exact behaviors, but similar shortcuts break topics that are present such as f(more X, less X) (cid:44) dupl.",
".",
"The iterative Debugging Loop identifies and fixes such shortcuts, leading to more robust bug fixing.",
"We evaluate accuracy on the validation dataset and on challenging out of domain datasets (Zhang et al., 2019; Potts et al., 2021) after training on all 6 topics (Table 1).",
"In both tasks, AdaTest augmentation has a negligible or non-significant impact on in-domain accuracy, and improves performance on out of domain data.",
"While AdaTest may have introduced new bugs not caught by the CheckList test suite or these additional test sets, the improved performance on all of these indicate that the Debugging Loop is not fixing bugs at the expense of significantly degrading performance elsewhere.",
"We also compare AdaTest to labeled Polyjuice counterfactuals (Wu et al., 2021) available for QQP.",
"Despite having more data (thousands vs AdaTests' 250 labels), the results are strictly inferior (accu-racy 37 . 8 on PAWS, fixed 2 topics and broke 1, while Adatest fixes 11 and breaks none).",
"Non-expert testing of non-classification models In order to evaluate if AdaTest would help non-experts test models for more complex tasks, we recruited a bilingual speaker with no technical background, and asked them to test a translation system and an NER system commercialized by a large software company (and thus subject to extensive prior testing and validation).",
"Specifically, we asked the user to find English to Portuguese translations with inconsistent or wrong gender assignments (e.g. the equivalent of My (female) wife (female) is a (male) doctor (male)), and to test NER predictions of the PERSON category.",
"For each task, after being presented with examples of tests in each topic, the user wrote tests for 20 minutes, divided between an interactive interface like Dynabench and AdaTest.",
"Even though the tasks are very di erent (gener-ation and per-token classification), the results are consistent with Section 3.1, with the user finding 3259 many more bugs with AdaTest (32 vs 4 on translation, 16 vs 0 on NER).",
"Qualitatively, adaptive test suggestions helped the user find bugs covering a much wider range of phenomena than all of the attempts without assistance.",
"For example, the user manually wrote di erent combinations of 15 subjects and 11 predicates for translation, all related to family members and professions (e.g. My mom is a doctor).",
"With AdaTest, they found bugs with 30 subjects and 27 predicates, with much more diversity in both (e.g. The woman with the red dress is my best friend).",
"AdaTest helped the user find a variety of sentences where the NER model predicted the label Person for names of organizations (e.g. What I do for Black Booty is provide financial advice), products (e.g. I think Alikat is a good form of cash money), and animals (e.g. Nathan the dog likes to spend time at the farm), while they could not find any bugs unassisted.",
"Text to video matching To gauge the usefulness of AdaTest for established model development and maintenance pipelines, we shared AdaTest with a ML development team in charge of a multi-modal classifier that matches textual inputs with a database of videos.",
"While their production model had gone through several external red-teaming reviews, a single short (unaided) AdaTest session revealed novel gender bias and related issues that were then fed back into their custom mitigation pipeline.",
"The team reported that being able to quickly generate diverse model-targeted tests, while at the same time creating a suite of tests for future model versions was extremely valuable, and they have since sought to develop adaptive test trees for their whole suite of production models.",
"Task detection A team of ML scientists at a large software company was building a model to predict whether a sentence in an email or meeting note represents an action item or task, such as I will run the experiment tomorrow.",
"Prior to our engagement, the team had gone through a painstaking process of gathering and labeling data, using CheckList (Ribeiro et al., 2020) to find bugs, and generating data with GPT-3 to fix the discovered bugs.",
"The team was thus well versed in testing, and had been trying to accomplish the same goals that AdaTest is built for, using the same exact LM.",
"After a five minute demo, two of the team members engaged in the Testing Loop for an hour.",
"In this short session, they found many previously Random Baseline GPT-3 aug AdaTest Task dataset 1 10.0 51.4 65.6 77.3 Task dataset 2 18.1 54.4 66.0 76.5 Table 2: F1 score on two hidden task datasets.",
"unknown bugs, with various topics they hadn't thought about testing (e.g. While X, task, as in While we wait for the manufacturer, let's build a slide deck), and some they had tested and (incor-rectly) thought they had fixed (e.g. false positives related to waiting, such as John will wait for the decision or Let's put a pin on it).",
"When testing name invariances with CheckList they hadn't included personal pronouns (e.g. Karen will implement the feature = I will implement the feature), which AdaTest revealed the model fails on.",
"One team member ran the Debugging Loop for approximately 3 hours, fixing bugs with the same procedure as in Section 3.2.",
"Consistent with the previous results, they found that fixing bugs initially led to new bugs being introduced, e.g. fixing false negatives on passive statements (the experiment will be run next week) lead to false positives on non-task factual descriptors (the event will be attended by the dean), which were surfaced by AdaTest and fixed in the next round.",
"In order to compare the results of using AdaTest to their previous e orts, we collected and labeled two new datasets from sources they hadn't used as training data.",
"We present the F1 scores of models augmented either with their GPT-3 generated data or on AdaTest data in Table 2, where AdaTest shows significant improvement despite involving much less e ort.",
"Qualitatively, the team noted that finding bugs with AdaTest was much easier than with CheckList, by virtue of the extensive suggestions made by the LM.",
"Similarly, after noticing (and fixing) potential shortcuts in multiple rounds of the Debugging Loop, the team realized that their prior GPT-3 augmentation was almost certainly liable to such shortcuts, and thus less e ective.",
"We evaluated AdaTest on 8 di erent tasks spanning text classification, generation, and per-token prediction.",
"In terms of finding bugs , we compare AdaTest to experts using CheckList and non-experts using a more responsive version of Dynabench.",
"Users 3260 consistently found many more bugs per minute with AdaTest on research models and commercial models at di erent development stages (early version, pre-release, and mature models in production).",
"The fact that AdaTest requires minimal training and is easy enough to be used by users without any technical background is an asset, especially when it is important to have testers that represent diverse groups that may be negatively impacted by bugs.",
"In terms of fixing bugs , we compared the Debugging Loop to naively augmenting data with CheckList templates, using Polyjuice counterfactuals, and having an expert use GPT-3 to create additional data.",
"In every case, AdaTest improved performance more than alternatives, and crucially did not add new bugs that degrade performance on available measurements, due to the iterative nature of the Debugging Loop.",
"In contrast to alternatives, further testing with AdaTest is low-cost, and thus this augmentation does not have the e ect of invalidating costly evaluation data (e.g. invalidating CheckList tests that are laborious to create).",
"In fact, test trees from previous sessions can be used to test new models, or to bootstrap a new AdaTest session.",
"Even though we used CheckList and Dynabench as baselines in the previous section, our results indicate that these and other approaches (Gardner et al., 2020; Kaushik et al., 2019) where human creativity and e ort are bottlenecks (Bhatt et al., 2021) would benefit from the greatly enhanced bug discovery productivity made possible by AdaTest.",
"On the other hand, CheckList as a framework provides great guidance in organizing the test tree, enumerating important capabilities and perturbations to be tested, as well as a tool for systematically applying the test tree to future models.",
"Similarly, Dynabench provides model serving capabilities and a crowdsourcing platform that would greatly enhance AdaTest, especially as users share test trees and adapt them to new models.",
"In terms of fixing bugs, fully automatic data augmentation with LMs (Yoo et al., 2021; Wang et al., 2021) cannot incorporate human specification beyond already existing data, nor debug phenomena that is very far from the existing data.",
"On the other hand, general purpose or contrastive counterfactuals have shown mixed or marginally positive results (Huang et al., 2020; Wu et al., 2021) similar to what we observed in Section 3.2, except when large quantities of data are gathered (Nie et al., 2020).",
"Our hypothesis is that underspecification (D'Amour et al., 2020) is a major factor limiting the benefit of many counterfactual augmentation techniques.",
"We observed that the first rounds of the Debugging Loop often decrease or maintain overall performance until additional data from later rounds specifies the correct behavior more thoroughly, which indicates that counterfactual data targeted precisely where the model is underspecified is often more e ective than non-targeted data.",
"If true, this hypothesis argues for AdaTest's fast iteration in the Debugging Loop, rather than longer cycles (e.g. Dynabench rounds can take months).",
"AdaTest encourages a close collaboration between a human and a language model, yielding the ben-efits of both.",
"The user provides specification that the LM lacks, while the LM provides creativity at a scale that is infeasible for the user.",
"AdaTest o ers significant productivity gains for expert users, while also remaining simple enough to empower diverse groups of non-experts.",
"The Debugging Loop connects model testing and debugging to e ectively fix bugs, taking model development a step closer towards the iterative nature of traditional software development.",
"We have demonstrated AdaTest's e ectiveness on classification models (sentiment analysis, QQP, toxicity, media selection, task detection), generation models (GPT-2, translation), and per-token models (NER), with models ranging from well-tested production systems to brand new applications.",
"Our results indicate that adaptive testing and debugging can serve as an e ective NLP development paradigm for a broad range of applications.",
"To help support this, AdaTest (with various test trees) is open sourced at https://github.com/microsoft/adatest .",
"We thank Adarsh Jeewajee, Carlos Guestrin, Ece Kamar, Fereshte Khani, Gregory Plumb, Gabriel Ilharco, Harsha Nori, Sameer Singh, and Shikhar Murty for helpful discussions and feedback.",
"We also thank Bruno Melo, Hamid Palangi, Ji Li, and Remmelt Ammerlaan for pilot testing / case studies.",
"Finally, we thank Tongshuang Wu for all of the above and helping us think about figures, checking translations, o ering LAT E X advice, and other miscellaneous help."
] | [
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other"
] |
[
"Stance detection on social media can help to identify and understand slanted news or commentary in everyday life.",
"In this work, we propose a new model for zero-shot stance detection on Twitter that uses adversarial learning to generalize across topics.",
"Our model achieves state-of-the-art performance on a number of unseen test topics with minimal computational costs.",
"In addition, we extend zero-shot stance detection to new topics, highlighting future directions for zero-shot transfer.",
"Stance detection, the problem of automatically identifying positions or opinions in text, is becoming increasingly important for social media (e.g., Twitter), as more and more people turn to it for their news.",
"Zero-shot stance detection, in particular, is crucial, since gathering training data for all topics is not feasible.",
"While there has been increasing work on zero-shot stance detection in other genres (Allaway and McKeown, 2020; Vamvas and Sennrich, 2020), generalization across many topics in social media remains an open challenge.",
"In this work, we propose a new model for stance detection that uses adversarial learning to generalize to unseen topics on Twitter.",
"Our model achieves state-of-the-art zero-shot performance on the majority of topics in the standard dataset for English stance detection on Twitter (Mohammad et al., 2016) and also provides benchmark results on two new topics in this dataset.",
"Most prior work on English social media stance detection uses the SemEval2016 Task 6 (SemT6) dataset (Mohammad et al., 2016) which consists of six topics.",
"While early work trained using five topics and evaluated on the sixth (e.g., Augenstein et al. (2016); Zarrella and Marsh (2016); Wei et al. (2016)), they used only one topic, Donald Trump' Denotes equal contribution.",
"(DT), for evaluation and did not experiment with others.",
"Furthermore, recent work on SemT6 has focused on cross-target stance detection (Xu et al., 2018; Wei and Mao, 2019; Zhang et al., 2020): training on one topic and evaluating on one different unseeen topic that has a known relationship with the training topic (e.g., legalization of abortion to feminist movement).",
"These models are typically evaluated on four different test topics (each with a different training topic).",
"In contrast, our work is a hybrid of these two settings: we train on five topics and evaluate on one other, but unlike prior work we do not assume a relationship between training and test topics and so we use each topic in turn as the test topic.",
"This illustrates the robustness of our model across topics and additionally allows zero-shot evaluation of SemT6 on two new topics that were previously ignored by cross-target models (atheism' and climate change is a real concern').",
"Recently, Allaway and McKeown (2020) introduced a new dataset of news article comments for zero-shot stance detection.",
"While this dataset evaluates generalization to many new topics when learning with many topics and only a few examples per topic, there are no datasets for social media with this setup.",
"Specifically, current datasets for stance detection on Twitter (Mohammad et al., 2016; Taul et al., 2017; Kk, 2017; Tsakalidis et al., 2018; Lai et al., 2020) have only a few topics but many examples per topic.",
"Therefore, zero-shot stance detection on social media is best modeled as a domain adaptation task.",
"To model zero-shot topic transfer as domain-adaptation, we treat each topic as a domain.",
"Following the success of adversarial learning for domain adaptation (Zhang et al., 2017; Ganin and Lempitsky, 2015), we use a discriminator (adversary) to learn topic-invariant representations that allow better generalization across topics.",
"Although, Wei and Mao (2019) also proposed adversarial learning for stance detection, their model relies on knowledge transfer between topics (domains) and so is only suited to the cross-target, not zero-shot, task.",
"In contrast, our work adopts a successful cross-target architecture into a domain adaptation model without requiring a priori knowledge of any relationship between topics.",
"Our contributions in this work are: 1) we propose a new model for zero-shot stance detection on Twitter using adversarial learning that does not make assumptions about the training and test topics, and 2) we achieve state-of-the-art performance on a range of topics and provide benchmark zero-shot results for two topics not previously used in the zero-shot setting with reduced computational requirements compared to pre-trained language models.",
"Our models are available at: https: //github.com/MalavikaSrikanth16/ adversarial-learning-for-stance .",
"We propose a new model, TO picAD versarial Network, for zero-shot stance detection, that uses the domain-transfer architecture from Zhang et al. (2017) coupled with a successful stance model (Au-genstein et al., 2016) with an additional topic-specific attention layer, to produce topic-invariant representations that generalize to unseen topics (see Figure 1).",
"Let D be a dataset of examples, each consisting of a document d (a tweet), a topic t , and a stance label y .",
"The task is to predict a label y 2 { pro, con, neutral } , given d and t .",
"In domain-adaptation, adversarial learning forces the model to learn domain-invariant (i.e., topic-invariant) features that can then be transferred to a new domain.",
"To do this, a classifier and a discriminator ( adversary ) are trained jointly from the same feature representation to maximize the classifier's performance while simultaneously minimizing the discriminator's.",
"(a) Topic-oriented Document Encoder We encode each example x = ( d, t, y ) using bidirectional conditional encoding (BiCond) (Augenstein et al., 2016), since computing representations conditioned on the topic have been shown to be crucial for zero-shot stance detection (Allaway and McKeown, 2020).",
"Specifically, we first encode the topic as h t using a BiLSTM (Hochreiter and Schmidhu-ber, 1997) and then encode the text using a second BiLSTM conditioned on h t .",
"To compute a document-level representation v dt , we apply scaled dot-product attention (Vaswani et al., 2017) over the output of the text BiLSTM, using the topic representation h t as the query.",
"This encourages the text encoder to produce representations that are indicative of stance on the topic and so would improve classification performance.",
"To prevent the adversary corrupting the encoder to reduce its own performance, we add a document reconstruction term ( L recd ) to our loss function, as in Zhang et al. (2017), as well as a topic reconstruction term ( L rect ), to ensure the output of neither BiLSTM is corrupted.",
"We use a non-linear transformation over the hidden states of each BiLSTM for reconstruction.",
"The reconstruction loss is the mean-squared error between the reconstructed vectors and the original vectors, under the same non-linearity.",
"(b) Topic-invariant Transformation To allow the adversary to produce topic-invariant representations without removing stance cues and without large adjustments to v dt , we follow Zhang et al. (2017) and apply a linear transformation f v dt = W tr v dt that we regularize ( L tr ) to the identity I .",
"(c) Stance Classifier We use a two-layer feed-forward neural network with a ReLU activation to predict stance labels ` 2 { \u0000 1 , 0 , 1 } .",
"Since stance is inherently dependent on a topic, and the output of the transformation layer should be topic-invariant, we add a residual connection between the topic encoder h t and the stance classifier.",
"That is, we concatenate h t with f v dt before classification.",
"(d) Topic Discriminator Our topic discriminator is also a two-layer feed-forward neural network with ReLU and predicts the topic t of the input x , given the output of the transformation layer f v dt .",
"In order to learn representations invariant to both the source and target domains, we train the discriminator using both labeled data for the source topics from D and unlabeled data D ul for the zero-shot topic ( not from the test data), following standard practice in domain adaptation (Ganin and LempitBiLSTM BiLSTM v dt h t Stance FFNN Topic FFNN f v dt ` t t d L rect L recd L s L t maximize minimize PL tr Att ention residual connection",
"Our model, TOAD, is trained by combining the individual component losses.",
"For both the stance classifier and topic-discriminator we use cross-entropy loss ( L s and L t respectively).",
"Since we hypothesize that topic-invariant representations will be well suited to zero-shot transfer, we want to minimize the discriminator's ability to predict the topic from the input.",
"Specifically, we minimize L s while maximizing L t , which we do using gradient reversal during backpropagation (Ganin and Lempitsky, 2015).",
"Our final loss function is then L = \u0000 rec ( L recd + L rect ) + \u0000 tr L tr + L s \u0000 L t where \u0000 rec , \u0000 tr are fixed hyperparameters.",
"The hyperparameter gradually increases across epochs, following Ganin and Lempitsky (2015).",
"All loss terms except L s are computed using both labeled and unlabeled data.",
"Data In our experiments, we use the SemT6 dataset (see Table 1) used in cross-target studies (Mohammad et al., 2016).",
"For each topic t 2 T , we train one model with t as the zero-shot test topic.",
"Specifically, we use all examples from each of the five topics in { T \u0000 t } for training and validation (split 85 / 15 ) and test on all examples for t .",
"To train the topic-discriminator, we additionally use 2 k unlabeled tweets for the zero-shot topic t from the set collected by Augenstein et al. (2016).",
"Theses tweets are from the same time period as the SemT6 dataset ( 2016 ) and therefore are better suited for training a discriminator than newly scraped Tweets.",
"To select Tweets for each topic we use 1-2 keywords (see Table 1).",
"Baselines We compare against a BERT (Devlin et al., 2019) baseline that encodes the document and topic jointly for classification, as in Allaway and McKeown (2020) and BiCond bidirectional conditional encoding (2.2) without attention (Augenstein et al., 2016).",
"Additionally, we compare against published results from three prior models: SEKT using a knowledge graph to improve topic transfer (Zhang et al., 2020), VTN adversarial learning with a topic-oriented memory network, and CrossN BiCond with an additional topic-specific self-attention layer (Xu et al., 2018).",
"Hyperparameters We tune the hyperparameters for our adversarial model using uniform sampling on the development set with 20 search trials.",
"We select the best hyperparameter setting using the average rank of the stance classifier F1 (higher is better) and topic discriminator F1 (lower is bet-ter).",
"We remove settings where the discriminator F1 is < 0 .",
"01 , under the assumption that such low performance is the result of overly corrupt representations that will not generalize.",
"We use pre-trained 100-dimensional GloVe vectors (Pennington et al., 2014) in our models.",
"Our implementations of BERT and BiCond are trained in the same setting as TOAD (i.e., 5 topics for train/dev, 1 topic for test).",
"However, because CrossN, VTN, and SEKT are designed to learn relationships between topics, they are not suited to the zero-shot task (only the cross-target task) and therefore we report only their published cross-target results for the topic pairs (i.e., train on one, test on the other) DT $ HC and FM $ LA.",
"We note that since TOAD is trained using significantly DT HC FM LA A CC P C F avg P C F avg P C F avg P C F avg P C F avg P C F avg BERT 22.3 57.9 40.1 36.1 63.2 49.6 46.6 37.3 41.9 36.9 52.8 44.8 39.6 70.8 55.2 66.3 8.2 37.3 BiCond 17.0 43.9 30.5 18.9 46.5 32.7 31.7 49.5 40.6 27.1 41.7 34.4 2.3 59.7 31.0 16.5 13.5 15.0 CrossN -46.1 -41.8 -43.1 -44.2 ---VTN -47.9 -36.4 -47.8 -47.3 ---SEKT -47.7 -42.0 -51.3 -53.6 ---TOAD 40.0 58.9 49.5 35.3 67.1 51.2 41.5 66.7 54.1 30.6 61.7 46.2 17.7 74.5 46.1 45.4 16.5 30.9 \u0000 adv 29.0 54.1 41.5 32.1 66.4 49.3 39.8 46.1 43.0 32.0 46.4 39.2 7.5 72.0 39.8 37.4 22.",
"more data, our experiments evaluate not only model architectures but also the benefit of the zero-shot setting for topic-transfer.",
"As in prior work (e.g., Zhang et al. (2020)) we report F avg : the average of F1 on pro and con.",
"Our model TOAD achieves state-of-the-art results (see Table 2) on two (DT, FM) of the four topics used in cross-target stance detection (DT: Donald Trump, HC: Hillary Clinton, FM: Feminist Movement, LA: Legalization of Abortion).",
"These results are statistically significant ( p < 0 . 005 ) when compared to both the BERT baseline and to TOAD without the adversary 1 .",
"In addition we provide benchmark results on two topics (A: Atheism, CC: climate change is a real concern) that have not been used previously for zero-shot evaluation.",
"We also observe that TOAD is statistically indistinguishable from BERT on three additional topics (HC, LA, CC) while having only 0 .",
"5% as many parameters ( 600 k versus 110 mil).",
"As a result of this small size, TOAD can be trained using only the CPU and, because of it's recurrent architecture, would gain less from the increased parallel computation of a GPU (compared to a transformer-based model).",
"Therefore, TOAD has a potentially much lower environmental impact than BERT with similar (or better) performance on five 1 SEKT code is not available for computing significance.",
"(b) Using the vocabulary of the topic on the y -axis.",
"Analysis Since cross-target models (e.g., SEKT) rely on assumptions about topic similarity, we first analyze the impact of topic similarity on stance performance (see Figure 2).",
"Specifically, we compute the Jensen-Shannon divergence (Lin, 1991) between word distributions for pairs of topics to examine the impact of topic similarity on stance performance (see A.4 for details).",
"We use Jensen-Shannon divergence ( DJS ) because it has been shown to successfully distinguish domains (Ruder and Plank, 2017; Plank and van Noord, 2011).",
"Using the combined vocabulary of both topics in a pair (see Figure 2a), we observe that human notions of similarity (used to select pairs for cross-target models) may be flawed.",
"For example, while the cross-target pair DT $ HC is relatively similar, for the other standard cross-target pair, FM $ LA, FM is almost as similar to DT as to LA.",
"Since zero-shot transfer methods use all non-test topics for training, they avoid difficulties introduced by flawed human assumptions about similarity (e.g., about the ideological similarity of FM and LA).",
"We then examine, whether distributional similarity between topics does actually relate to cross-target ( T 1 ! T 2 ) stance performance.",
"Using the vocabulary for only one topic ( VT 1 ) per pair (see Figure 2b), we observe an inverse relationship between similarity and relative stance performance.",
"Specifically, relatively lower similarity (higher divergence) often leads to relatively higher stance performance.",
"For example, DJS ( HC || DT ) is higher than DJS ( DT || HC ) suggesting that a model trained on HC has less information about the word-distribution for DT than a model trained on DT has about HC.",
"However, the cross-target stance models trained in the HC !",
"DT setup (e.g., SEKT) actually perform relatively better than those trained in the DT !",
"HC setup.",
"This highlights a further problem in the cross-target setting: using similar topics may encourage models to rely on distributional patterns that do not correlate well with cross-topic stance labels.",
"Next, we examine how topic-invariant the representations from TOAD actually are, and the impact of this on stance classification.",
"We extract representations from our models, apply K-means clustering with k = 6 , and compare the resulting clusters to the gold topic labeling (see Table 3).",
"We examine representations from models trained with either zero-shot topic DT or HC because the improvement by the adversary is statistically significant for DT but not for HC.",
"We observe that for both topics, the clusters from TOAD representations are less aligned with topics.",
"This shows that using adversarial learning produces more topic-invariant representations than without it.",
"Furthermore, we see that the difference (in both homogeneity and completeness) between TOAD with and without the adversary is larger on DT than on HC ( \u0000 0 . 7 and \u0000 0 . 02 respectively).",
"This suggests that the stance detection performance difference between TOAD with and without the adversary is tied to the success of the adversary at producing topic-invariant representations.",
"That is, when the adversary is less successful, it does not provide much benefit to TOAD.",
"Finally, we conduct an ablation on the topic-specific components of TOAD (Table 4).",
"We observe that the residual topic and unlabeled data are especially important.",
"Note that while the keywords DT HCF avg \u0000 F avg \u0000 TOAD 49.5 51.2 \u0000 L rect 44.6 -4.9 52.5 +1.3 \u0000 residual topic 39.3 -10.2 43.4 -7.8 \u0000 D ul 40.0 -9.5 51.1 -0.1 Table 4: Ablation of TOAD with test sets DT and HC.",
"used to collect unlabeled data may favor the pro class (e.g., aborti ), we do not observe a preference for the pro class in our models, likely due to class imbalance (e.g., 20.9% pro DT).",
"Additionally, we observe that while the topic reconstruction L rect is important for DT, it actually decreases the performance of the HC model.",
"We hypothesize that this is because the adversary is less successful for HC and therefore L rect only increases the noise in the stance classification loss for HC.",
"Our results reaffirm the dependence of stance on the topic while also highlighting the importance of fully topic-invariant representations in order to generalize.",
"We propose a new model for zero-shot stance detection on Twitter that uses adversarial learning to produce topic-invariant representations that generalize to unseen topics.",
"Our model achieves state-of-the-art performance on a number of unseen topics with reduced computational requirements.",
"In addition, our training procedure allows the model to generalize to new topics unrelated to the training topics and to provide benchmark results on two topics that have not previously been evaluated on in zero-shot settings.",
"In future work, we plan to investigate how to extend our models to Twitter datasets in languages other than English.",
"We thank the Columbia NLP group and the anonymous reviewers for their comments.",
"This work is supported in part by DARPA under agreement number FA8750-18-2-0014 and by the National Science Foundation Graduate Research Fellowship under Grant No.",
"DGE-1644869.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements of DARPA, the U.S. Government, or the NSF.",
"We use a dataset collected and distributed for the SemEval2016 Task 6 (Mohammad et al., 2016) and used extensively by the community.",
"Data was collected from publicly available posts on Twitter using a set of manually identified hashtags (e.g., #NoMoreReligions and #Godswill, see http://saifmohammad.com/WebDocs/ Stance/hashtags_all.txt for a complete list).",
"All tweets with the hashtag at the end were collected and then post-processed to remove the actual hashtag.",
"Thus, there is no information on the gender, ethnicity or race of the people who posted.",
"Many of the tweets that we examined were Standard American English coupled with internet slang.",
"The intended use of our technology is to predict the stance of authors towards topics, where the topics are often political in nature.",
"This technology could be useful for people in office who want to understand how their constituents feel about an issue under discussion; it may be useful to decide on new policies going forward or to react proac-tively to situations where people are upset about a public issue.",
"For example, we can imagine using such a tool to determine how people feel about the safety of a vaccine or how they feel about immigration policies.",
"If the system is incorrect in its prediction of stance, end users would not fully understand how people feel about different topics.",
"For example, we can imagine that they may decide that there is no need to implement an education program on vaccine safety if the stance prediction tool inaccurately predicts that people feel good about vaccine safety.",
"The benefits of understanding, with some inaccuracy, how people feel about a topic, outweigh the situation where one has no information (or only information that could be gleaned by manually reading a few examples).",
"The technology would not be deployed, in any case, until accuracy is improved.",
"We also note that since many topics are political in nature, this technology could be used nefariously to identify people to target with certain types of political ads or disinformation (based on automatically identified beliefs) or by employers to identify political opinions of employees.",
"However, because the data does not include any user-identifying information, we ourselves are prevented from such usage and any future wrongful deployment of the technology in these settings would be a direct violation of Twitter's Terms of Service for developers 2 .",
"Given that we don't know the race of posters and we don't know whether African American Vernacular is fairly represented in the corpus, we don't know whether the tool would make fair predictions for people who speak this dialect.",
"Further work would need to be done to create a tool that can make fair predictions regardless of race, gender or ethnicity.",
"As noted in the paper, the environmental impact of training and deploying our tool is less than for all comparably performing models."
] | [
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method"
] |
[
"Active learning promises to alleviate the massive data needs of supervised machine learning: it has successfully improved sample efficiency by an order of magnitude on traditional tasks like topic classification and object recognition.",
"However, we uncover a striking contrast to this promise: across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.",
"To understand this discrepancy, we profile 8 active learning methods on a per-example basis, and identify the problem as collective outliers groups of examples that active learning methods prefer to acquire but models fail to learn (e.g., questions that ask about text in images or require external knowledge).",
"Through systematic ablation experiments and qualitative visualizations, we verify that collective outliers are a general phenomenon responsible for degrading pool-based active learning.",
"Notably, we show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.",
"We conclude with a discussion and prescriptive recommendations for mitigating the effects of these outliers in future work.",
"Today, language-equipped vision systems such as VizWiz, TapTapSee, BeMyEyes, and CamFind are actively being deployed across a broad spectrum of users.",
"1 As underlying methods improve, these systems will be expected to operate over diverse visual environments and understand myriad language inputs (Bigham et al., 2010; Tellex et al., 2011; Mei et al., 2016; Zhu et al., 2017; Anderson et al., 2018b; Park et al., 2019).",
"Visual Question Answering (VQA), the task of answering questions about 1 Applications can be found at https://vizwiz.org/ , https://taptapsee.com/ , https://www.bemyeyes.com/ , and https://camfindapp.com/ Q: What sport is she playing?",
"visual inputs, is a popular benchmark used to evaluate progress towards such open-ended systems (Agrawal et al., 2015; Krishna et al., 2017; Gordon et al., 2018; Hudson and Manning, 2019).",
"Unfortunately, today's VQA models are data hungry: Their performance scales monotonically with more training data (Lu et al., 2016; Lin and Parikh, 2017), motivating the need for data acquisition mechanisms such as active learning, which maximize performance while minimizing expensive data labeling.",
"While active learning is often key to effective data acquisition when such labeled data is difficult to obtain (Lewis and Catlett, 1994; Tong and Koller, 2001; Culotta and McCallum, 2005; Settles, 2009), we find that 8 modern active learning methods (Gal et al., 2017; Siddhant and Lipton, 2018; Lowell et al., 2019) show little to no improvement in sample efficiency across 5 models on 4 VQA datasets indeed, in some cases performing worse than randomly selecting data to label.",
"This finding is in stark contrast to the successful application of active learning methods on a variety of traditional tasks, such as topic classification (Siddhant and Lipton, 2018; Lowell et al., 2019), object recognition (Deng et al., 2018), digit classification (Gal et al., 2017), and named entity recognition (Shen et al., 2017).",
"Our negative results hold even when accounting for common active learning ailments: cold starts, correlated sampling, and uncalibrated uncertainty.",
"We mitigate the cold start challenge of needing a representative initial dataset by varying the size of the seed set in our experiments.",
"We account for sampling correlated data within a given batch by including Core-Set selection (Sener and Savarese, 2018) in the set of active learning methods we evaluate.",
"Finally, we use deep Bayesian active learning to calibrate model uncertainty to high-dimensional data (Houlsby et al., 2011; Gal and Ghahramani, 2016; Gal et al., 2017).",
"After concluding that negative results are consistent across all experimental conditions, we investigate active learning's ineffectiveness on VQA as a data problem and identify the existence of collective outliers (Han and Kamber, 2000) as the source of the problem.",
"Leveraging recent advances in model interpretability, we build Dataset Maps (Swayamdipta et al., 2020), which distinguish between collective outliers and useful data that improve validation set performance (see Figure 1).",
"While global outliers deviate from the rest of the data and are often a consequence of labeling error, collective outliers cluster together; they may not individually be identifiable as outliers but collectively deviate from other examples in the dataset.",
"For instance, VQA-2 (Goyal et al., 2017) is riddled with collections of hard questions that require external knowledge to answer (e.g., What is the symbol on the hood often associated with?) or that ask the model to read text in the images (e.g., What is the word on the wall?).",
"Similarly, GQA (Hudson and Manning, 2019) asks underspecified questions (e.g., what is the person wearing? which can have multiple correct answers).",
"Collective outliers are not specific to VQA, but can similarly be found in many open-ended tasks, including visual navigation (Anderson et al., 2018b) (e.g., Go to the grandfather clock requires identifying rare grandfather clocks), and open-domain question answering (Kwiatkowski et al., 2019), amongst others.",
"Using Dataset Maps, we profile active learning methods and show that they prefer acquiring collective outliers that models are unable to learn, explaining their poor improvements in sample efficiency relative to random sampling.",
"Building on this, we use these maps to perform ablations where we identify and remove outliers iteratively from the active learning pool, observing correlated improvements in sample efficiency.",
"This allows us to conclude that collective outliers are, indeed, responsible for the ineffectiveness of active learning for VQA.",
"We end with prescriptive suggestions for future work in building active learning methods robust to these types of outliers.",
"Our work tests the utility of multiple recent active learning methods on the open-ended understanding task of VQA.",
"We draw on the dataset analysis literature to identify collective outliers as the bottleneck hindering active learning methods in this setting.",
"Active Learning.",
"Active learning strategies have been successfully applied to image recognition (Joshi et al., 2009; Sener and Savarese, 2018), information extraction (Scheffer et al., 2001; Finn and Kushmerick, 2003; Jones et al., 2003; Culotta and McCallum, 2005), named entity recognition (Hachey et al., 2005; Shen et al., 2017), semantic parsing (Dong et al., 2018), and text categorization (Lewis and Gale, 1994; Hoi et al., 2006).",
"However, these same methods struggle to outperform a random baseline when applied to the task of VQA (Lin and Parikh, 2017; Jedoui et al., 2019).",
"To study this discrepancy, we systematically apply 8 diverse active learning methods to VQA, including methods that use model uncertainty (Abramson and Fre-und, 2004; Collins et al., 2008; Joshi et al., 2009), Bayesian uncertainty (Gal and Ghahramani, 2016; Kendall and Gal, 2017), disagreement (Houlsby et al., 2011; Gal et al., 2017), and Core-Set selection (Sener and Savarese, 2018).",
"Visual Question Answering.",
"Progress on VQA has been heralded as a marker for progress on general open-ended understanding tasks, resulting in several benchmarks (Agrawal et al., 2015; Malinowski et al., 2015; Ren et al., 2015a; Johnson et al., 2017; Goyal et al., 2017; Krishna et al., 2017; Suhr et al., 2019; Hudson and Manning, 2019) and models (Zhou et al., 2015; Fukui et al., 2016; Lu et al., 2016; Yang et al., 2016; Zhu et al., 2016; Wu et al., 2016; Anderson et al., 2018a; Tan and Bansal, 2019; Chen et al., 2020).",
"To ensure that our negative results are not dataset or model-specific, we sample 4 datasets and 5 representative models, each utilizing unique visual and linguistic features and employing different inductive biases.",
"Interpreting and Analyzing Datasets.",
"Given the prevalence of large datasets in modern machine learning, it is critical to assess dataset properties to remove redundancies (Gururangan et al., 2018; Li and Vasconcelos, 2019) or biases (Torralba and Efros, 2011; Khosla et al., 2012; Bolukbasi et al., 2016), both of which negatively impact sample efficiency.",
"Prior work has used training dynamics to find examples which are frequently forgotten (Krymolowski, 2002; Toneva et al., 2019) versus those that are easy to learn (Bras et al., 2020).",
"This work suggests using two model-specific measures confidence and prediction variance as indicators of a training example's learnability (Chang et al., 2017; Swayamdipta et al., 2020).",
"Dataset Maps (Swayamdipta et al., 2020), a recently introduced framework uses these two measures to profile datasets to find learnable examples.",
"Unlike prior datasets analyzed by Dataset Maps that have a small number of global outliers as hard examples, we discover that VQA datasets contain copious amounts of collective outliers, which are difficult or even impossible for models to learn.",
"We adopt the standard pool-based active learning setup from prior work (Lewis and Gale, 1994; Settles, 2009; Gal et al., 2017; Lin and Parikh, 2017), consisting of a model M , initial seed set of labeled examples ( x i , y i ) D seed used to initialize M , an unlabeled pool of data D pool , and an acquisition function A ( x, M ) .",
"We run active learning over a series of acquisition iterations Pool Size # Answers VQA-Sports 5,411 [5k] 20 VQA-Food 4,082 [4k] 20 VQA-2 411,272 [400k] 3130 GQA 943,000 [900k] 1842 Table 1: We evaluate active learning on 4 VQA datasets.",
"T where at each iteration we acquire a batch of B new examples per: x D pool to label per x = arg max x D pool A ( x, M ) .",
"Acquiring an example often refers to using an oracle or human expert to annotate a new example with a correct label.",
"We follow prior work to simulate an oracle using existing datasets, forming D seed from a fixed percentage of the full dataset, and using the remainder as D pool (Gal et al., 2017; Lin and Parikh, 2017; Siddhant and Lipton, 2018).",
"We re-train M after each acquisition iteration.",
"Prior work has noted the impact of seed set size on active learning performance (Lin and Parikh, 2017; Misra et al., 2018; Jedoui et al., 2019).",
"We run multiple active learning evaluations with varying seed set sizes (ranging from 5% to 50% of the full pool size).",
"We keep the size of each acquisition batch B to a constant 10% of the overall pool size.",
"Visual Question Answering (VQA) requires reasoning over two modalities: images and text.",
"Most models use feature backbones (e.g., features from object recognition models pretrained on Ima-geNet, and pretrained word vectors for text).",
"For image features we use grid-based features from ResNet-101 (He et al., 2016), or object-based features from Faster R-CNN (Ren et al., 2015b) fine-tuned on Visual Genome (Anderson et al., 2018a).",
"We evaluate with a representative sample of existing VQA models, including the following: 2 LogReg is a logistic regression model that uses either ResNet-101 or Faster R-CNN image features with mean-pooled GloVe question embeddings (Pennington et al., 2014).",
"Although these models 2 Key implementation details can be found in the appendix.",
"In the interest of full reproducibility and further work in active learning and VQA, we release our code and results here: https://github.com/siddk/vqa-outliers .",
"are not as performant as the subsequent models, logistic regression has been effective on VQA (Suhr et al., 2019), and is pervasive in the active learning literature (Schein and Ungar, 2007; Yang and Loog, 2018; Mussmann and Liang, 2018).",
"LSTM-CNN is a standard model introduced with VQA-1 (Agrawal et al., 2015).",
"We use more performant ResNet-101 features instead of the original VGGNet features as our visual backbone.",
"BUTD (Bottom-Up Top-Down Attention) uses object-based features in tandem with attention over objects (Anderson et al., 2018a).",
"BUTD won the 2017 VQA Challenge (Teney et al., 2018), and has been a consistent baseline for recent work in VQA.",
"LXMERT is a large multi-modal transformer model that uses BUTD's object features and con-textualized BERT (Devlin et al., 2019) language features (Tan and Bansal, 2019).",
"LXMERT is pretrained on a corpus of aligned image-and-textual data spanning MS COCO, Visual Genome, VQA-2, NLVR-2, and GQA (Lin et al., 2014; Krishna et al., 2017; Goyal et al., 2017; Suhr et al., 2019; Hudson and Manning, 2019), initializing a cross-modal representation space conducive to fine-tuning.",
"3 3.2 Acquisition Functions Several active learning methods have been developed to account for different aspects of the machine learning training pipeline: while some acquire examples with high aleotoric uncertainty (Settles, 2009) (having to do with the natural uncertainty in the data) or epistemic uncertainty (Gal et al., 2017) (having to do with the uncertainty in the modeling/learning process), others attempt to acquire examples that reflect the distribution of data in the pool (Sener and Savarese, 2018).",
"We sample a diverse set of these methods: Random Sampling serves as our baseline passive approach for acquiring examples.",
"3 Results for LXMERT in Tan and Bansal (2019) are reported after pretraining on training and validation examples from the VQA datasets we use.",
"While this is fair if the goal is optimizing for test performance, this exposure to training and validation examples leaks important information; to remedy this, we obtained a model checkpoint from the LXMERT authors trained without VQA data.",
"This is also why our LXMERT results are lower than the numbers reported in the original paper however, the general boost provided by cross-modal pretraining holds.",
"Entropy acquires examples with the highest entropy in the model's output (Settles, 2009).",
"MC-Dropout Entropy (Monte-Carlo Dropout with Entropy acquisition) acquires examples with high entropy in the model's output averaged over multiple passes through a neural network with different dropout masks (Gal and Ghahramani, 2016).",
"This process is a consequence of a theoretical casting of dropout as approximate Bayesian inference in deep Gaussian processes.",
"BALD (Bayesian Active Learning by Disagreement) builds upon Monte-Carlo Dropout by proposing a decision theoretic objective; it acquires examples that maximise the decrease in expected posterior entropy (Houlsby et al., 2011; Gal et al., 2017; Siddhant and Lipton, 2018) capturing disagree-ment across different dropout masks.",
"Core-Set Selection samples examples that capture the diversity of the data pool (Sener and Savarese, 2018; Coleman et al., 2020).",
"It acquires examples to minimize the distance between an example in the unlabeled pool to its closest labeled example.",
"Since Core-Set selection operates over a representation space (and not an output distribution, like prior strategies) and VQA models operate over two modalities, we employ three Core-Set variants: Core-Set (Language) and Core-Set (Vision) operate over their respective representation spaces while Core-Set (Fused) operates over the fused vision and language representation space.",
"We evaluate the 8 active learning strategies across the 5 models described in the previous section.",
"Figures 25 show a representative sample of active learning results across datasets.",
"Due to space constraints, we only visualize 4 active learning strategies Least-Confidence, BALD, CoreSet-Fused, and the Random Baseline using 3 models (LSTM-CNN, BUTD, LXMERT).",
"4 Results and trends are consistent across the different acquisition functions, models and seed set sizes (see the appendix for results with other models, acquisition functions, and seed set sizes).",
"We now go on to provide descriptions of the datasets we evaluate against, and the corresponding results.",
"4 For LXMERT, running Core-Set selection is prohibitive, so we omit these results; please see Appendix B for more details.",
"One complexity of VQA is the size of the output space and the number of examples present (Agrawal et al., 2015; Goyal et al., 2017); VQA-2 has 400k training examples, and in excess of 3k possible answers (see Table 1).",
"However, prior work in active learning focuses on smaller datasets like the 10-class MNIST dataset (Gal et al., 2017), binary classification (Siddhant and Lipton, 2018), or small-cardinality ( 20 classes) text categorization (Lowell et al., 2019).",
"To ensure our results and conclusions are not due to the size of the output space, we build two meaningful, but narrow-domain VQA datasets from subsets of VQA-2.",
"These simplified datasets reduce the complexity of the underlying learning problem and provide a fair comparison to existing active learning literature.",
"VQA-Sports.",
"We generate VQA-Sports by compiling a list of 20 popular sports (e.g., soccer, football, tennis, etc.) in VQA-2, and restricting the set of questions to those with answers in this list.",
"We picked the sports categories by ranking the GloVe vector similarity between the word sports to answers in VQA-2, and selected the 20 most commonly occurring answers.",
"VQA-Food.",
"We generate the VQA-Food dataset similarly, compiling a list of the 20 commonly occurring food categories by GloVe vector similarity to the word food.",
"Results.",
"Figure 2 presents results for VQA-Sports, with an initial seed set restricted to 10% of the total pool (500 examples).",
"The appendix reports similar results on VQA-Food.",
"For LSTM-CNN, Least-Confidence appears to be slightly more sample efficient, while all other strategies perform on par with or worse than random.",
"For BUTD, all methods are on par with random; for LXMERT, they perform worse than random.",
"Generally on VQA-Sports, active learning performance varies, but fails to outperform random acquisition.",
"VQA-2 is the canonical dataset for evaluating VQA models (Goyal et al., 2017).",
"In keeping with prior work (Anderson et al., 2018a; Tan and Bansal, 2019), we filter the training set to only include answers that appear at least 9 times, resulting in 3130 unique answers.",
"Unlike traditional VQA-2 evaluation, which treats the task as a multi-label binary classification problem, we follow prior active learning work on VQA (Lin and Parikh, 2017), which formulates it as a multi-class classification problem, enabling the use of acquisition functions such as uncertainty sampling and BALD.",
"Results.",
"Figures 3 and 4 show results on VQA-2 with different seed set sizes 10% (40k examples) and 50% (200k examples).",
"Active learning performs relatively better with larger seed sets but still underperforms random.",
"Surprisingly, when initialized with 50% of the pool as the seed set, the gain in validation accuracy after acquiring the entire pool of examples (400k examples total) is only 2%.",
"This is an indication that the lack of sample efficiency might be a result of the underlying data, a problem we explore in the next section.",
"GQA was introduced as a means for evaluating compositional reasoning (Hudson and Manning, 2019).",
"Unlike VQA's natural human-written questions, GQA contains synthetic questions of the form what is inside the bottle the glasses are to Underspecification:What is on the shelf? Multi-hop reasoning: What is the vehicle that is driving down the road the box is on the side of? GQAVQA2 External knowledge: What does the symbol on the blanket mean? OCR:What is the first word on the black car? Figure 7: Example groups of collective outliers in the VQA-2 and GQA datasets. the right of?.",
"Results.",
"Figure 5 shows results on GQA using a seed set of 10% of the full pool (90k examples).",
"Despite its notable differences in question structure to VQA-2, active learning still performs on par with or slightly worse than random.",
"The previous section shows that active learning fails to improve over random acquisition on VQA across models and datasets.",
"A simple question remains why ?",
"One hypothesis is that sample ineffi-ciency stems from the data itself: there is only a 2% gain in validation accuracy when training on half versus the whole dataset.",
"Working from this, we characterize the underlying datasets using Dataset Maps (Swayamdipta et al., 2020) and discover that active learning methods prefer sampling hard-to-learn examples, leading to poor performance.",
"Mapping VQA Datasets.",
"A Dataset Map (Swayamdipta et al., 2020) is a model-specific graph for profiling the learnability of individual training examples.",
"Dataset Maps present holistic pictures of classification datasets relative to the training dynamics of a given model; as a model trains for multiple epochs and sees the same examples repeatedly, the mapping process logs statistics about the confidence assigned to individual predictions.",
"Maps then visualize these statistics against two axes: the y-axis plots the average model confidence assigned to the correct answer over training epochs, while the x-axis plots the spread, or variability, of these values.",
"This introduces a 2D representation of a dataset (viewed through its relationship with individual model) where examples are placed on the map by coarse statistics describing their learnability. We show the Dataset Map for BUTD trained on VQA-2 in Figure 1. For our work, we build this map post-hoc, training on the entire pool as a means for analyzing what active learning is doing treating it as a diagnostic tool for identifying the root cause why active learning seems to fail for VQA. In an ideal setting, the majority of examples in the training set should lie in the upper half of the graph i.e., the mean confidence assigned to the correct answer should be relatively high. Examples towards the upper-left side represent the easy-to-learn examples, as the variability in the confidence assigned by the model over time is fairly low.",
"A curious feature of VQA-2 and other VQA datasets is the presence of the 25-30% of examples in the bottom-left of the map (shown in red in Figure 1) examples that have low confidence and variability.",
"In other words, models are unable to learn a large proportion of training examples.",
"While prior work attributes examples in this quadrant to labeling errors (Swayamdipta et al., 2020), labeling errors in VQA are sparse, and cannot account for the density of such examples in these maps.",
"Interpreting Acquisitions.",
"We profile the acquisitions made by each active learning method, con-textualizing the acquired examples via their placement on the associated Dataset Map.",
"We segregate training examples into four buckets using the map's y-axis: easy ( 0 . 75 ), medium ( 0 . 50 ), hard ( 0 . 25 ), and impossible ( 0 . 00 ).",
"Ideally, active learning should be robust to hard-to-learn examples, focusing instead on learnable, high uncertainty examples towards the upper-right portion of the Dataset Map.",
"Instead, we find that active learning methods acquire a large proportion of impossible examples early on and concentrate on the easier examples only after the impossible examples dwindle (see Figure 6).",
"In contrast, the random baseline acquires examples proportional to each bucket's density in the underlying map; acquiring easier examples earlier and performing on par with or better than all others.",
"This leaves two questions: 1) can we characterize these hard examples, and 2) are these examples responsible for the ineffectiveness of active learning on VQA?",
"We first identify hard-to-learn examples as collective outliers and explain why active learning methods prefer to acquire them.",
"Next, we perform ablation experiments, removing these outliers from the active learning pool iteratively, and demonstrate a corresponding boost in sample efficiency relative to random acquisition.",
"Hard Examples are Collective Outliers.",
"Collective outliers are groups of examples that deviate from the rest of the examples but cluster together (Han and Kamber, 2000) they often present as fundamental subproblems of a broader task.",
"For instance (Figure 7), in VQA-2, we identify clusters of hard-to-learn examples that require optical character recognition (OCR) for reasoning about text (e.g., What is the first word on the black car?); another cluster requires external knowledge to answer (What is the symbol on the hood often associated with?).",
"In GQA, we identify different clusters of collective outliers; one cluster stems from innate underspecification (e.g., what is on the shelf? with multiple objects present on the shelf); another cluster requires multiple reasoning hops difficult for current models (e.g., What is the vehicle that is driving down the road the box is on the side of?).",
"We sample 100 random hard-to-learn examples from both VQA-2 and GQA and find that 100% of the examples belong to one of the two aforementioned collectives.",
"Since hard-to-learn examples constitute 2530% of the data pool, active learning methods cannot avoid them.",
"Uncertainty-based methods (e.g., Least-Confidence, Entropy, Monte-Carlo Dropout) identify them as valid acquisition targets because models lack the capacity to correctly answer these examples, assigning low confidence and high uncertainty.",
"Disagreement-based methods (e.g., BALD) are similar; model confidence is generally low but high variance (lower middle/lower right of the Dataset Maps).",
"Finally, diversity methods (e.g., Core-Set selection) identify these examples as different enough from the existing pool to warrant acquisition, but fail to learn meaningful representations, fueling a vicious cycle wherein they continue to pick these examples.",
"Ablating Outliers.",
"To verify that collective outliers are responsible for the degradation of active learning performance, we re-run our experiments using active learning pools with varying numbers of outliers removed.",
"To remove these outliers, we sort and remove all examples in the data pool using the product of their model confidence and prediction variability (x and y-axis values of the Dataset Maps).",
"We systematically remove examples with a low product value and observe how active learning performance changes (see Figure 8).",
"We observe a 23x improvement in sample efficiency when removing 50% of the entire data pool, consisting mainly of collective outliers (Figure 8c).",
"This improvement decreases if we only remove 25% of the full pool (Figure 8b), and further degrades if we remove only 10% (Figure 8a).",
"This ablation demonstrates that active learning methods are more sample efficient than the random baseline when collective outliers are absent from the unlabelled pool.",
"This paper asks a simple question why does the modern neural active learning toolkit fail when applied to complex, open ended tasks?",
"While we focus on VQA, collective outliers are abundant in tasks such as natural language inference (Bow-man et al., 2015; Williams et al., 2018) and open-domain question answering (Kwiatkowski et al., 2019), amongst others.",
"More insidious is their na-ture; collective outliers can take multiple forms, requiring external domain knowledge or common-sense reasoning, containing underspecification, or requiring capabilities beyond the scope of a given model (e.g., requiring OCR ability).",
"While we perform ablations in this work removing collective outliers, demonstrating that active learning fails as collective outliers take up larger portions of the dataset, this is only an analytical tool; these outliers are, and will continue to be, pervasive in open-ended datasets and as such, we will need to develop better tools for learning (and performing active learning) in their presence.",
"Selective Classification.",
"One potential direction for future work is to develop systems that abstain when they encounter collective outliers.",
"Historical artificial intelligence systems, such as SHRDLU (Winograd, 1972) and QUALM (Lehnert, 1977), were designed to flag input sequences that they were not designed to parse.",
"Ideas from those methods can and should be resurrected using modern techniques; for example, recent work suggests that a simple classifier can be trained to identify out-of-domain data inputs, provided a seed out-of-domain dataset (Kamath et al., 2020).",
"Active learning methods can be augmented with a similar classifier, which re-calibrates active learning uncertainty scores with this classifier's predictions.",
"Other work learns to identify novel utterances by learning to intelligently set thresholds in representation space (Karamcheti et al., 2020), a powerful idea especially if combined with other representation-centric active learning methods like Core-Set Sampling (Sener and Savarese, 2018).",
"Another direction for future work to explore is to leverage Dataset Maps to perform more global, holistic reasoning over datasets, to intelligently identify promising examples in a sense, baking part of the analysis done in this work directly into the active learning algorithms.",
"A possible instantiation of this idea would be in training a discriminator to differentiate between learnable examples (upper half of each Dataset Map) from the unlearnable, collective outliers with low confidence and low variability.",
"Between each active learning acquisition iteration, one can generate an updated Dataset Map, thereby reflecting what models are learning as they obtain new labeled examples.",
"Machine learning systems deployed in real-world settings will inevitably encounter open-world datasets, ones that contain a mixture of learnable and unlearnable inputs.",
"Our work provides a framework to study when models encounter such inputs.",
"Overall, we hope that our experiments serve as a catalyst for future work on evaluating active learning methods with inputs drawn from open-world datasets.",
"All code for data preprocessing, model implementation, and active learning algorithms is made available at https://github.com/siddk/vqa-outliers .",
"Additionally, this repository also contains the full set of results and dataset maps as well.",
"The authors are fully committed to maintaining this repository, in terms of both functionality and ease of use, and will actively monitor both email and Github Issues should there be problems.",
"We thank Kaylee Burns, Eric Mitchell, Stephen Mussman, Dorsa Sadigh, and our anonymous ACL reviewers for their useful feedback on earlier versions of this paper.",
"We are also grateful to Hao Tan for providing us with the LXMERT checkpoint trained without access to VQA datasets, as well as for general LXMERT fine-tuning pointers.",
"Siddharth Karamcheti is graciously supported by the Open Philanthropy Project AI Fellowship.",
"Christopher D. Manning is a CIFAR Fellow."
] | [
"abstain",
"objective",
"method",
"result",
"result",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Knowledge-intensive tasks such as question answering often require assimilating information from different sections of large inputs such as books or article collections.",
"We propose READTWICE 1 , a simple and effective technique that combines several strengths of prior approaches to model long-range dependencies with Transformers.",
"The main idea is to read text in small segments, in parallel, summarizing each segment into a memory table to be used in a second read of the text.",
"We show that the method outperforms models of comparable size on several question answering (QA) datasets and sets a new state of the art on the challenging NarrativeQA task, with questions about entire books.",
"Transformer-based models such as BERT are very effective in capturing long-range dependencies in text passages through the attention mechanism (Vaswani et al., 2017; Devlin et al., 2019).",
"However, the amount of compute in attention depends quadratically on the number of tokens in an input text passage.",
"As such, the standard BERT implementation limits input size to a fixed number (often 512) of tokens.",
"In reality, dependencies over significantly longer ranges are common and modeling them is crucial.",
"For instance, in a sentence like Inside the Sammath Naur, the Ring-bearer struggled to throw the Ring into the volcano , the narrative interweaves several prior storylines from a book.",
"Comprehending this sentence therefore requires looking up previous Work is done while at Google On leave from University of Southern California ([email protected]) 1 Source code and pre-trained checkpoints for READTWICE can be found at https://goo.gle/ research-readtwice .",
"Several methods have been proposed to address this challenge; see (Tay et al., 2020) for a survey and 3 for a detailed discussion.",
"One popular strategy is to reduce the number of tokens attended to.",
"Longer inputs can in fact be processed in this way but only up to a limit of around 5,000 tokens, as used in (Ainslie et al., 2020; Zaheer et al., 2020; Beltagy et al., 2020) far below the context sizes required to model long documents such as books.",
"Another strategy such as HIBERT (Zhang et al., 2019) splits inputs into smaller segments which are processed individually, then assembled into a hierarchical representation.",
"As a downside, inter-segment context is unavailable during encoding.",
"We propose READTWICE , a simple approach that combines the strengths of both strategies.",
"As its name suggests, the main idea is to process the input twice: a long text input (such as a document, or even a book) is treated as a collection of shorter text segments which are read independently and in parallel.",
"Then, the encoder reads each segement again, now augmented with compressed information from other segments.",
"The crucial component in READTWICE , as illustrated in Figure 1, is a memory module that holds compressed information from all segments.",
"That compressed information is used only once : in the second pass.",
"Thus, READTWICE is much more computationally efficient than models like ETC that rely on memory for all segments, in every layer.",
"While READTWICE requires two passes, it differs from hierarchical models such as HIBERT that do not condition segment encoding on other segments.",
"3 contrasts these approaches in more detail.",
"We validate the efficacy of READTWICE on extractive question answering (QA) tasks, showing strong performance on HotpotQA (Yang et al., 2018), TriviaQA (Joshi et al., 2017) and Narra-Figure 1: READTWICE model architecture.",
"The input is processed twice, with a memory table for inter-segment information sharing.",
"tiveQA (Kocisk et al., 2018).",
"In particular, READTWICE significantly improves the state-of-the-art on QA based on entire books in NarrativeQA, with absolutes gains of 4.5 ROUGE-L points and 3 BLEU-1 points (relative improvements of 23% and 17%, respectively).",
"The model reads a large text document split into N segments x 1 , . . . , x N ; each x i is limited to 512 tokens, as in a typical BERT model.",
"The model architecture is depicted in Figure 1.",
"In the first read, each segment is encoded independently with standard BERT.",
"Then, memories are extracted from each segmenta process we describe in detail laterand gathered into a global memory pool.",
"For the second read, a MemoryAttention layer (with a residual connection and a LayerNorm on top) is first used to merge the information from the former intra-segmental contextual token embeddings and the global memory.",
"The merged result is then read by another small BERT model with only two Transformer layers to produce the final output.",
"The rationale is that the first read already generates rich contextualized embeddings, and the second read only needs to incorporate information from the memory.",
"More formally: H 0 i = TokenEmbed ( x i ) , H 1 i = BERT 1 ( x i ) , i M i = ExtractMemories ( H 1 i ) , i M = Gather ([ M 1 , . . . , MN ]) H 2 i = MemoryAttention ( H 1 i , M ) , i H 3 i = LayerNorm ( H 1 i + H 2 i ) , i H 4 i = BERT 2 ( H 3 i ) , i Next, we describe the newly introduced layers.",
"ExtractMemories and Gather Our aim is to compress the information in each segment and disseminate it to other segments to be used in the second read.",
"We consider three types of memories: READTWICE ( CLS ).",
"One obvious choice is to use the CLS token representation associated with segment x i as a summary of the segment.",
"READTWICE ( STS ).",
"To obtain more fine-grained memories, we extract a memory vector for each consecutive span of 32 tokens.",
"Contextual embeddings of each span's first and the last tokens are concatenated and linearly projected to a single point in the token vector space as the span representation.",
"The projection matrix is learned end to end.",
"READTWICE ( E ).",
"In another variant of span-based memory, we memorize representations of entity mention spans.",
"To obtain these spans, we first annotate each segment with an external Named Entity Recognition system.",
"Then, each entity mention span is encoded in the same way as in READTWICE ( STS ).",
"This design is motivated by the intuition that long-range dependencies primarily occur between entities.",
"Empirically, we find that READTWICE ( E ) leads to best performance (see the ablation in Section 4.4) and it is the memory type used in our headline results.",
"We collect all memories from all segments into a flat memory table.",
"The table size is given by the number of segments ( CLS ), the number of 32-token spans ( STS ), or the number of entity mentions ( E ).",
"MemoryAttention In this layer, we let contextual token embeddings from individual segments interact with other segments' memories via dot-product attention over the memory table.",
"where M 0 is a learnable no-op memory not associated with any specific text.",
"r i,m s is a learned position score which captures the relative distance between segment i and the memory M m , akin to Shaw et al. (2018): r i,m s = ( dist ( i, m s )) (2) where is a set of weights indexed by the distance dist ( i, m s ) = B i m s < B B i m s > B i m s otherwise (3) where the cutoff threshold B clips the effect of distance to [ B, B ] .",
"We set B to 10 in this work.",
"Finally, the MemoryAttention layer output for a given token is given by h 2 ij = (cid:88) m =1 m M m (4) 2.2 Pre-training We pretrain READTWICE similarly to (Devlin et al., 2019), using the Wikipedia and BooksCor-pus datasets.",
"When entity mentions are used in the memory table, the texts are processed with the Entity Linking (EL) and Named Entity Recognition (NER) tools from the Google Cloud NLP API 2 .",
"Moreover, we use existing hyperlinks in Wikipedia as additional entity annotations.",
"The first and the second BERT readers are trained end-to-end.",
"Our pre-training objective is the standard Masked Language Model (MLM) task, with the MLM prediction loss computed based on the output of the second reader.",
"In order to encourage the model to rely on the memory, we increase the difficulty of the MLM task.",
"Following the entity masking procedure in (Guu et al., 2020; Sun et al., 2019), we mask entity mention tokens more aggressively at a 25% rate and jointly mask all tokens within a mention.",
"By contrast, for non-entity tokens, we mask contiguous sequences of random length at a 15% rate.",
"One way to extend the limit on input size is by reducing the number of tokens attended to.",
"ETC (Ainslie et al., 2020) and LONGFORMER (Belt-agy et al., 2020) allow standard attention only between tokens within a fixed distance.",
"To allow information flow over longer distances, they use auxiliary global \"memory\" tokens which attend to all regular tokens and vice versa.",
"BIGBIRD (Zaheer et al., 2020) additionally has each token attend to a random subset of other tokens.",
"While reducing asymptotic complexity from quadratic to linear (in input size), these global tokens are added at each attention layer, incurring a high computational cost.",
"Another approach is to split the input into multiple segments and then aggregate information across segments.",
"This is achieved through hierarchical modeling (Chang et al., 2019; Zhang et al., 2019).",
"While reducing the attention size to the number of segments, each individual segment has no information about its siblings during token-level encoding.",
"Alternatively, recurrent models (Dai et al., 2019; Rae et al., 2019) read a large input from left to right, dynamically compressing faraway contexts, thus allowing unidirectional information aggregation (left to right).",
"One disadvantage is that the input needs to be processed sequentially, which becomes time-consuming for producing contextualized representations of a large input.",
"Our method brings these lines of work together.",
"Processing segments independently and in parallel, then memorizing their compressed representations and sharing memory across segments enables contextual embeddings to be updated based on faraway information.",
"Enabling memory sharing only once during the second readallows it be done cheaply.",
"Note that the memory module here is internally generated from the input, as opposed to external memory models which are orthogonal to our approach (Peters et al., 2019; Fvry et al., 2020).",
"All READTWICE models are initialized with the public ROBERTA (base) checkpoint 3 adapted to Tensorflow by Rothe et al. (2020).",
"Further, models are pre-trained for 1M steps on 64 TPU cores using the LAMB optimizer (You et al., 2020).",
"Each batch contains 512 segments, with at most 128 segments per document.",
"The segments are consecutive spans of 512 tokens.",
"Therefore, the model can process documents up to 65k ( 128 512 ) tokens.",
"Each batch contains the maximum number of documents such that the total number of segments is at most 512.",
"Approximately half of Wikipedia articles fit in one segment (thus not needing memory), with a fat tail of longer documents.",
"In terms of compute and memory overhead, READTWICE is about 30% slower than the ROBERTA -base model and uses 15M (or 12%) more parameters: 14M owing to the second read BERT 2 and 1M due to ExtractMemories and MemoryAttention layers.",
"We evaluate READTWICE on the downstream extractive question-answering task using several datasets: HotpotQA (HQA) (Yang et al., 2018), TriviaQA (TQA) (Joshi et al., 2017) and NarrativeQA (NQA) (Kocisk et al., 2018).",
"In HQA, questions are based on relatively short text passages (2 evidence paragraphs), with eight additional distractor passages.",
"In TQA, evidence text is medium-sized.",
"NQA asks questions about entire books, requiring a successful QA system to model very long-range dependencies.",
"The NQA dataset has an average of 62,000 words per document with a maximum of 400,000.",
"Only 40% of NQA's answers are span-based we use a ROUGE-L oracle as training labels for the other questions.",
"READTWICE is fine-tuned on each task.",
"QA-specific heads are used to generate span-based predictions, consisting of fully-connected layers that take contextual embeddings from the second reader as inputs.",
"These layers output a score for whether the corresponding tokens are the beginning or ending of an answer span.",
"For a similar setup, see multi-segment based QA tasks (Clark and Gardner, 2018; Cheng et al., 2020).",
"During fine-tuning, batches contain 128 segments for all tasks (also with up to 128 segments per document).",
"Every segment contains 512 tokens, but as neighboring segments have 128 token overlaps, the model can process documents of up to 49K tokens ( 128 (512 128) ).",
"For TQA and HQA, documents have approximately 10 segments.",
"For NQA, we split the documents into sub-documents with 49k tokens and apply memory only within these sub-documents.",
"Moreover, we use early stopping based on the performance on the development set.",
"Results for HQA and TQA are reported in Table 1.",
"We compare to prior art (using reported results where available or from our own implementations otherwise, denoted as us): Longformer (LF) (Beltagy et al., 2020), ETC (Ainslie et al., 2020), BigBird (Zaheer et al., 2020), and ROBERTA (Liu et al., 2019).",
"By default, we compare against the base configuration of those models where the number of parameters is comparable to BERT-Base, as is the case for READTWICE . Table 1 shows that for small to medium sized text passages, the proposed READTWICE outperforms all models of comparable size. Table 2 contrasts READTWICE to other methods on extremely large contexts: BiDAF (Kocisk et al., 2018), R 3 (Wang et al., 2018), BM25 + BERT Reader / Ranker (Mou et al., 2020) and our own implementation of ROBERTA and ETC 5 . READTWICE significantly outperforms all previous work and establishes new state-of-the-art results, demonstrating the effectiveness of performing a second read conditioned on global memory for processing extremely long texts. 4.4 Ablation Analysis & Discussion To isolate individual components' contributions, Table 3 contrasts several variants of READTWICE . 4 See https://competitions.codalab.org/ competitions/17208#results , tab Wikipedia.",
"5 For ETC we use the public (base configuration) checkpoint https://storage.googleapis.com/ gresearch/etcmodel/checkpoints/etc_base_2x_pretrain.zip Model ROUGE-L BLEU-1 BLEU-4 METEOR BiDAF (Kocisk et al., 2018) 6.3 / 6.2 5.8 / 5.7 0.2 / 0.3 3.8 / 3.7 R 3 (Wang et al., 2018) 11.4 / 11.9 16.4 / 15.7 0.5 / 0.5 3.5 / 3.5 BM25+BERT (Mou et al., 2020) 14.8 / 15.5 14.6 / 14.5 1.8 / 1.4 5.1 / 5.0 ROBERTA (us) 17.4 / 18.0 18.2 / 18.0 2.4 / 2.6 5.4 / 5.4 ETC (us) 18.3 / 18.8 16.1 / 17.2 2.4 / 2.7 5.4 / 5.4 READTWICE ( E ) 22.7 / 23.3 21.1 / 21.1 3.6 / 4.0 6.7 / 7.0 Table 2: Results on the NarrativeQA's development / test splits.",
"Inter-segment memory matters We introduce a variant READTWICE -E(SS) (where SS stands for Single Segment) to isolate the gains from the memory layer.",
"READTWICE -E(SS) prevents segments from attending to memories of other segments, thus disabling long-range dependency modeling.",
"We observe that READTWICE-E improves over READTWICE -E(SS) on all tasks, modestly but non-negligibly for TQA, and significantly for HQA and especially NQA.",
"This matches our knowledge of those datasets: TQA questions are based on a relatively short context and can typically be answered using a single passage in the context document.",
"HQA questions have a similarly sized context, but are explicitly constructed to require information from multiple paragraphs to answer, and READTWICE shows accordingly larger gains.",
"Finally, NQA has much larger contexts, and its questions generally require information from different parts of the document, increasing the importance of long-range dependency modeling and accordingly, the performance boost from READTWICE .",
"Entities matter Entity mentions appears to be the most effective memory type in most experiments, leading to noticeably improved performance on both HQA and NQA.",
"The difference is most pronounced in NQA whose particularly long and challenging contexts make it a perfect testbed.",
"Source of non-memory gains The non-memory gains over a baseline ROBERTA model originate from the two extra layers and the entity-based MLM objective.",
"In order to disentangle the sources of gains we train the READTWICE -E(SS) model using a 10-layer Transformer for BERT 1 (denoted as E ( SS , 10L ) in Table 3), with the same number of layers as ROBERTA .",
"While the gains from 2 extra layers are significant ( E ( SS ) vs E ( SS , 10L ) ), most of the gains appear to result from the custom Model HQA NQA -R NQA -B TQA E 75.89 22.71 21.07 80.7 E ( SS ) 75.08 21.93 18.39 80.3 E ( SS , 10L ) 74.70 21.39 18.37 80.4 ROBERTA 72.00 17.40 18.2 75.9 CLS 75.32 20.89 17.80 80.6 STS 75.39 21.08 18.38 80.4 Table 3: Ablation studies on variants of READTWICE on the dev sets.",
"READTWICE performs well on several QA tasks, particularly NarrativeQA where long-range dependencies among entities appear to be very important.",
"The proposed method is conceptually simple, easy to implement and is capable of reading entire books.",
"For future work, we plan to explore new memory types, hierarchies and aggregation functions.",
"We also aim to apply the model to other tasks, particularly long text summarization, likely to benefit from a memory-forming mechanism.",
"We thank Santiago Ontanon, Manzil Zaheer, Sudeep Gandhe, Anirudh Ravula, Bhargav Kana-gal, Jules Gagnon-Marchand and Sumit Sanghai for insightful discussions, Mou et al. (2020) for a sample evaluation code for NarrativeQA and reviewers for their feedback.",
"This work is partially supported by NSF Awards IIS-1513966/ 1632803/1833137, CCF-1139148, DARPA Awards#: FA8750-18-2-0117, FA8750-19-1-0504, DARPA-D3M Award UCB-00009528, Google Research Awards, gifts from Facebook and Netflix, and ARO# W911NF-12-1-0241 and W911NF-15-1-0484."
] | [
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other"
] |
[
"Neural models have achieved great success on machine reading comprehension (MRC), many of which typically consist of two components: an evidence extractor and an answer predictor.",
"The former seeks the most relevant information from a reference text, while the latter is to locate or generate answers from the extracted evidence.",
"Despite the importance of evidence labels for training the evidence extractor, they are not cheaply accessible, particularly in many non-extractive MRC tasks such as YES/NO question answering and multi-choice MRC.",
"To address this problem, we present a Self-Training method (STM), which supervises the evidence extractor with auto-generated evidence labels in an iterative process.",
"At each iteration, a base MRC model is trained with golden answers and noisy evidence labels.",
"The trained model will predict pseudo evidence labels as extra supervision in the next iteration.",
"We evaluate STM on seven datasets over three MRC tasks.",
"Experimental results demonstrate the improvement on existing MRC models, and we also analyze how and why such a self-training method works in MRC.",
"Machine reading comprehension (MRC) has received increasing attention recently, which can be roughly divided into two categories: extractive and non-extractive MRC.",
"Extractive MRC requires a model to extract an answer span to a question from reference documents, such as the tasks in SQuAD (Rajpurkar et al., 2016) and CoQA (Reddy et al., 2019).",
"In contrast, non-extractive MRC infers answers based on some evidence in reference Equal contribution Corresponding author: Minlie Huang.",
"documents, including Yes/No question answering (Clark et al., 2019), multiple-choice MRC (Lai et al., 2017; Khashabi et al., 2018; Sun et al., 2019), and open domain question answering (Dhingra et al., 2017b).",
"As shown in Table 1, evidence plays a vital role in MRC (Zhou et al., 2019; Ding et al., 2019; Min et al., 2018), and the coarse-to-fine paradigm has been widely adopted in multiple models (Choi et al., 2017; Li et al., 2018; Wang et al., 2018) where an evidence extractor first seeks the evidence from given documents and then an answer predictor infers the answer based on the evidence.",
"However, it is challenging to learn a good evidence extractor due to the lack of evidence labels for supervision.",
"Manually annotating the golden evidence is expensive.",
"Therefore, some recent efforts have been dedicated to improving MRC by leveraging noisy evidence labels when training the evidence extractor.",
"Some works (Lin et al., 2018; Min et al., 2018) generate distant labels using hand-crafted rules and external resources.",
"Some studies (Wang et al., 2018; Choi et al., 2017) adopt reinforcement learning (RL) to decide the labels of evidence.",
"However, such RL methods suffer from unstable training.",
"More distant supervision techniques are also used to refine noisy labels, such as deep probability logic (Wang et al., 2019), but they are hard to transfer to other tasks.",
"Nevertheless, improving the evidence extractor remains challenging when golden evidence labels are not available.",
"In this paper, we present a general and effective method based on Self-Training (Scudder, 1965) to improve MRC with soft evidence extraction when golden evidence labels are not available.",
"Following the Self-Training paradigm, a base MRC model is iteratively trained.",
"At each iteration, the base model is trained with golden answers, as well as noisy evidence labels obtained at the preceding it-Q: Did a little boy write the note?",
"D: ...",
"This note is from a little girl .",
"She wants to be your friend.",
"If you want to be her friend, ...",
"A: No Q: Is she carrying something?",
"D: ...On the step, I find the elderly Chinese lady, small and slight, holding the hand of a little boy.",
"In her other hand, she holds a paper carrier bag.",
"...",
"A: Yes Table 1: Examples of Yes/No question answering.",
"eration.",
"Then, the trained model generates noisy evidence labels, which will be used to supervise evidence extraction at the next iteration.",
"The overview of our method is shown in Figure",
"1. Through this iterative process, the evidence is labeled automatically to guide the RC model to find answers, and then a better RC model benefits the evidence labeling process in return.",
"Our method works without any manual efforts or external information, and therefore can be applied to any MRC tasks.",
"Besides, the Self-Training algorithm converges more stably than RL.",
"Two main contributions in this paper are summarized as follows:",
"1. We propose a self-training method to improve machine reading comprehension by soft evidence labeling.",
"Compared with other existing methods, our method is more effective and general.",
"2. We verify the generalization and effectiveness of STM on several MRC tasks, including Yes/No question answering (YNQA), multiple-choice machine reading comprehension (MMRC), and open-domain question answering (ODQA).",
"Our method is applicable to different base models, including BERT and DSQA (Lin et al., 2018).",
"Experimental results demonstrate that our proposed method improves base models in three MRC tasks remarkably.",
"Early MRC studies focus on modeling semantic matching between a question and a reference document (Seo et al., 2017; Huang et al., 2018; Zhu",
"et al., 2018; Mihaylov and Frank, 2018).",
"In order to mimic the reading mode of human, hierarchical coarse-to-fine methods are proposed (Choi et al., 2017; Li et al., 2018).",
"Such models first read the full text to select relevant text spans, and then infer answers from these relevant spans.",
"Extracting such spans in MRC is drawing more and more attention, though still quite challenging (Wang et al., 2019).",
"Evidence extraction aims at finding evidential and relevant information for downstream processes in a task, which arguably improves the overall performance of the task.",
"Not surprisingly, evidence extraction is useful and becomes an important component in fact verification (Zhou et al., 2019; Yin and Roth, 2018; Hanselowski et al., 2018; Ma et al., 2019), multiple-choice reading comprehension (Wang et al., 2019; Bax, 2013; Yu et al., 2019), open-domain question answering (Lin et al., 2018; Wang et al., 2018), multi-hop reading comprehension (Nishida et al., 2019; Ding et al., 2019), natural language inference (Wang et al., 2017; Chen et al., 2017), and a wide range of other tasks (Nguyen and Nguyen, 2018; Chen and Bansal, 2018).",
"In general, evidence extraction in MRC can be classified into four types according to the training method.",
"First, unsupervised methods provide no guidance for evidence extraction (Seo et al., 2017; Huang et al., 2019).",
"Second, supervised methods train evidence extraction with golden evidence labels, which sometimes can be generated automatically in extractive MRC settings (Lin et al., 2018; Yin and Roth, 2018; Hanselowski et al., 2018).",
"Third, weakly supervised methods rely on noisy evidence labels, where the labels can be obtained by heuristic rules (Min et al., 2018).",
"Moreover, some data programming techniques, such as deep probability logic, were proposed to refine noisy labels (Wang et al., 2019).",
"Last, if a weak extractor is obtained via unsupervised or weakly supervised pre-training, reinforcement learning can be utilized to learn a better policy of evidence extraction (Wang et al., 2018; Choi et al., 2017).",
"For non-extractive MRC tasks, such as YNQA and MMRC, it is cumbersome and inefficient to annotate evidence labels (Ma et al., 2019).",
"Although various methods for evidence extraction have been proposed, training an effective extractor is still a challenging problem when golden evidence labels are unavailable.",
"Weakly supervised methods either suffer from the low performance or rely on too many external resources, which makes them difficult to transfer to other tasks.",
"RL methods can indeed train a better extractor without evidence labels.",
"However, they are much more complicated and unstable to train, and highly dependent on model pre-training.",
"Our method is based on Self-Training, a widely used semi-supervised method.",
"Most related studies follow the framework of traditional Self-Training (Scudder, 1965) and Co-Training (Blum and Mitchell, 1998), and focus on designing better policies for selecting confident samples.",
"Co-Trade (Zhang and Zhou, 2011) evaluates the confidence of whether a sample has been correctly labeled via a statistic-based data editing technique (Zighed et al., 2002).",
"Self-paced Co-Training (Ma et al., 2017) adjusts labeled data dynamically according to the consistency between the two models trained on different views.",
"A reinforcement learning method (Wu et al., 2018) designs an additional Q-agent as a sample selector.",
"The task of machine reading comprehension can be formalized as follows: given a reference document composed of a number of sentences D = { S 1 , S 2 , , S m } and a question Q , the model should extract or generate an answer A to this question conditioned on the document, formally as",
"The process can be decomposed into two components, i.e., an evidence extractor and an answer predictor.",
"The golden answer A is given for training the entire model, including the evidence extractor and the answer predictor.",
"Denote E i as a binary evidence label { 0 , 1 } for the i -th sentence S i , where 0 / 1 corresponds to the non-evidence/evidence sentence, respectively.",
"An auxiliary loss on the evidence labels can help the training of the evidence extractor.",
"The overview of our method is shown in Figure 1, which is an iterative process.",
"During training, two data pools are maintained and denoted as U (unlabeled data) and L (labeled data).",
"In addition to golden answers, examples in L are annotated with pseudo evidence labels.",
"In contrast, there are only golden answers provided in U .",
"At each iteration, the base model is trained on both data pools (two training arrows).",
"After training, the model makes evidence predictions on unlabeled instances (the labeling arrow), and then Selector chooses the most confident instances from U to provide noisy evidence labels.",
"In particular, the instances with newly generated evidence labels are moved from U to L (the moving arrow), which are used to supervise evidence extraction in the next iteration.",
"This process will iterate several times.",
"As shown in Figure 2, the overall structure of a base model consists of an encoder layer, an evidence extractor, and an answer predictor.",
"The encoder layer takes document D and question Q as input to obtain contextual representation for each word.",
"Denote h Di,j as the representation of the j -th word in S i , and h Qi as the representation of the i -th word in question Q .",
"Our framework is agnostic to the architecture of the encoder, and we show improvements on two widely used encoding models, i.e., Transformer (with BERT, Devlin et al., 2019) and LSTM (with DSQA, Lin et al., 2018) in the experiments.",
"The evidence extractor employs hierarchical attention, including tokenand sentence-level attention, to obtain the document representation h D .",
"Token-level attention obtains a sentence vector by self-attention (Vaswani et al., 2017) within the words in a sentence, as follows: h Di = | S i | (cid:88) j i,j h Di,j , i,j exp( FS ( h Q , h Di,j )) , s Di = | S i | (cid:88) j i,j h Di,j , i,j exp( w s h Di,j + b s ) , where h Q is the sentence representation of the question.",
"i,j refers to the importance of word j in sentence i , and so on for i,j .",
"w s and b s are learnable parameters.",
"The attention function FS follows the bilinear form (Kim et al., 2018).",
"Sentence-level attention identifies important sentences conditioned on the question in a soft way to get the summary vector ( h D ), as follows: h D = m (cid:88) i i h Di , i exp( FD ( h Q , s Di )) , where FD has the same bilinear form as FS with different parameters.",
"i refers to the importance of the corresponding sentence.",
"The answer predictor adopts different structures for different MRC tasks.",
"For Yes/No question answering, we use a simple linear classifier to infer answers.",
"For multiple-choice MRC, we use a Multiple Layer Perceptron (MLP) with Softmax to obtain the score of each choice.",
"And for open-domain question answering, one MLP is used to predict the answer start, and another MLP is used to predict the end.",
"We adopt two loss functions, one for task-specific loss and the other for evidence loss.",
"The task-specific loss is defined as the negative log-likelihood (NLL) of predicting golden answers, formally as follows: LA ( D, Q, A ) = log P ( A = A | D, Q ) , where A denotes the predicted answer and A is the golden answer.",
"When the evidence label E is provided, we can impose supervision on the evidence extractor.",
"For the most general case, we assume that a variable number of evidence sentences exist in each sample ( Q, A, D ) .",
"Inspired by the previous work (Nishida et al., 2019) that used multiple pieces of evidence, we calculate the evidence loss step by step.",
"Suppose we will extract K evidence sentences.",
"In the first step, we compute the loss of selecting the most plausible evidence sentence.",
"In the second step, we compute the loss in the remaining sentences, where the previously selected sentence is masked and not counted in computing the loss at the second step.",
"The overall loss is the average of all the step-by-step loss until we select out K evidence sentences.",
"In this manner, we devise a BP-able surrogate loss function for choosing the top K evidence sentences.",
"Formally, we have LE ( D, Q, E ) = 1 KK (cid:88) k =1 H ( D, Q, E, M k ) , where K is the number of evidence sentences, a pre-specified hyperparamter.",
"M k = { M k 1 , M k 2 , , M km } and each M ki { 0 , } is a sentence mask, where 0 means sentence i is not selected before step k , and means selected.",
"As M ki = for the previously selected sentences, the attention weight on those sentences will be zero, in other words, they are masked out.",
"Then, the step-wise loss can be computed as follows: H ( D, Q, E, M k ) = log max i ( ki E i ) , where ki indicates the attention weight for sentence i , and E i { 0 , 1 } is the evidence label for sentence i .",
"The sentence with the largest attention weight will be chosen as the k -th evidence sentence.",
"For each sentence i , M 1 i is initialized to be 0 .",
"At each step k ( k > 1) , the mask M ki will be set to if sentence i is chosen as an evidence sentence at the preceding step k 1 , and the mask remains unchanged otherwise.",
"Formally, the mask is updated as follows: M ki = (cid:40) i = argmax j ( k 1 j E j ) M k 1 i otherwise .",
"During training, the total loss L is the combination of the task-specific loss and the evidence loss: L = (cid:88) ( D,Q,A ) U LLA ( D, Q, A )+ (cid:88) ( D,Q,E ) LLE ( D, Q, E ) , (1) where is a factor to balance the two loss terms.",
"L and U denote the two sets in which instances with and without evidence labels, respectively.",
"Note that the evidence label in L is automatically obtained in our self-training method.",
"STM is designed to improve base MRC models via generating pseudo evidence labels for evidence extraction when golden labels are unavailable.",
"STM works in an iterative manner, and each iteration consists of two stages.",
"One is to learn a better base model for answer prediction and evidence labeling.",
"The other is to obtain more precise evidence labels for the next iteration using the updated model.",
"At each iteration, STM first trains the base model with golden answers and pseudo evidence labels from the preceding iteration using the total loss as defined Equation",
"1. Then the trained model can predict a distribution of pseudo evidence labels for each unlabelled instance ( D, Q, A ) , and decides E as E = argmin E (cid:48) LE ( D, Q, E (cid:48) ) .",
"(2) Define the confidence of a labelled instance ( D, Q, A, E ) as c ( D, Q, A, E ) = exp( LA ( D, Q, A )) exp( LE ( D, Q, E )) .",
"Selector selects the instances with the largest confidence scores whose LA ( D, Q, A ) and LE ( D, Q, E ) are smaller than the prespecified thresholds.",
"These labelled instances will be moved from U to L for the next iteration.",
"In the first iteration (iteration 0), the initial labeled set L is set to an empty set.",
"Thus the base model is supervised only by golden answers.",
"In this case, the evidence extractor is trained in a distant supervised manner.",
"The procedure of one iteration of STM is illustrated in Algorithm",
"1. and (cid:15) are two thresholds (hyper-parameters).",
"sort operation ranks the candidate samples according to their confidence scores s and returns the topn samples.",
"n varies different datasets, and details are presented in the appendix .",
"Algorithm 1 One iteration of STM Input: Training sets U, L ; Thresholds and (cid:15) ; Number of generated labels n ; Weight of evidence loss ; Output: Trained MRC model M ; Updated training sets U, L ; 1: Randomly initialize M ; 2: Train M on U and L ; 3: Initialize L (cid:48) = ; 4: for each ( D, Q, A ) U do 5: l A = LA ( D, Q, A ) ; 6: Generate E via Equation 2; 7: l E = LE ( D, Q, E ) ; 8: if l A , l E (cid:15) then 9: s = c ( D, Q, A, E ) ; 10: Add ( D, Q, A, E, s ) to L (cid:48) ; 11: end if 12: end for 13: L (cid:48) = sort ( L (cid:48) , n ) ; 14: L = L L (cid:48) , U = U \\ L (cid:48) ; 15: return M, U, L ; 3.5 Analysis To understand why STM can improve evidence extraction and the performance of MRC, we revisit the training process and present a theoretical explanation, as inspired by (Anonymous, 2020).",
"In Section 3.4, we introduce the simple labeling strategy used in STM.",
"If there is no sample selection, the evidence loss can be formulated as L t = E x p ( x ) EE p t 1 ( E | x ) log p t ( E | x ) , where x represents ( D, Q, A ) , and t is the parameter of the t -th iteration.",
"In this case, pseudo evidence labels E are randomly sampled from p t 1 ( E | x ) to guide p t ( E | x ) , and therefore minimizing L t will lead to t = t 1 .",
"As a matter of fact, the sample selection strategy in STM is to filter out the low-quality pseudo labels with two Model / Dataset CoQA MARCO BoolQ BERT-MLP 78.0 70.8 71.6 BERT-HA 78.8 71.3 72.9 BERT-HA+RL 79.3 70.3 70.4 BERT-HA+Rule 78.1 70.4 73.8 BERT-HA+STM 80.5 72.3 75.2 BERT-HA+Gold 82.0 N/A N/A Table 2: Classification accuracy on three Yes/No question answering datasets.",
"In STM, f is a filter function with two pre-specified thresholds, and (cid:15) .",
"g is defined as argmax (Equa-tion 2).",
"Compared with random sampling, our strategy tends to prevent t from learning wrong knowledge from t 1 .",
"And the subsequent training might benefit from implicitly learning the strategy.",
"In general, the strategy of STM imposes naive prior knowledge on the base models via the two distribution mappings, which may partly explain the performance gains.",
"CoQA (Reddy et al., 2019) is a multi-turn conversational question answering dataset where questions may be incomplete and need historical context to get the answers.",
"We extracted the Yes/No questions from CoQA, along with their histories, to form a YNQA dataset.",
"BoolQ (Clark et al., 2019) consists of Yes/No questions from the Google search engine.",
"Each question is accompanied by a related paragraph.",
"We expanded each short paragraph by concatenating some randomly sampled sentences.",
"MS MARCO (Nguyen et al., 2016) is a large MRC dataset.",
"Each question is paired with a set of reference documents, and the answer may not exist in the documents.",
"We extracted all Yes/No questions, and randomly picked some reference documents containing evidence 1 .",
"To balance the ratio of Yes 1 The evidence annotation in a document is provided by the and No questions, we randomly removed some questions whose answers are Yes.",
"RACE (Lai et al., 2017) consists of about 28,000 passages and 100,000 questions from English exams for middle (RACE-M) and high (RACE-H) schools of China.",
"The average number of sentences per passage in RACE-M and RACE-H is about 16 and 17, respectively.",
"DREAM (Sun et al., 2019) contains 10,197 multiple-choice questions with 6,444 dialogues, collected from English examinations.",
"In DREAM, 85% of the questions require reasoning with multiple evidential sentences.",
"MultiRC (Khashabi et al., 2018) is an MMRC dataset where the amount of correct options to each question varies from 1 to 10.",
"Each question in MultiRC is annotated with evidence from its reference document.",
"The average number of annotated evidence sentences for each question is 2.3.",
"Quasar-T (Dhingra et al., 2017b) consists of 43,000 open-domain trivial questions, whose answers were extracted from ClueWeb09.",
"For fair comparison, we retrieved 50 reference sentences from ClueWeb09 for each question the same as DSQA (Lin et al., 2018).",
"We compared several methods in our experiments, including some powerful base models without evidence supervision and some existing methods (*+Rule/RL/DPL/STM), which improve MRC with noisy evidence labels.",
"Experimental details are shown in the appendix.",
"YNQA and MMRC : (1) BERT-MLP utilizes a BERT encoder and an MLP answer predictor.",
"The predictor makes classification based on the BERT representation at the position of [CLS] .",
"The parameters of the BERT module were initialized from BERT-base.",
"(2) BERT-HA refers to the base model introduced in Section 3.2, which applies hierarchical attention over words and sentences.",
"(3) Based on BERT-HA, BERT-HA+Rule supervises the evidence extractor with noisy evidence labels, which are derived from hand-crafted rules.",
"We have explored three types of rules based on Jaccard similarity, integer linear programming original dataset.",
"(ILP) (Boudin et al., 2015), and inverse term frequency (ITF) (Wang et al., 2019), among which ITF performed best in most cases.",
"For simplicity, we merely provided experimental results with the rule of ITF.",
"(4) Based on BERT-HA, BERT-HA+RL trains the evidence extractor via reinforcement learning, similar to (Choi et al., 2017).",
"And (5) another deep programming logic (DPL) method, GPT+DPL (Wang et al., 2019), is complicated, and the source code is not provided.",
"Thus we directly used the results from the original paper and did not evaluate it on BERT.",
"ODQA : (1) For each question, DSQA (Lin et al., 2018) aggregates multiple relevant paragraphs from ClueWeb09, and then infers an answer from these paragraphs.",
"(2) GA (Dhingra et al., 2017a) and BiDAF (Seo et al., 2017) perform semantic matching between questions and paragraphs with attention mechanisms.",
"And (3) R 3 (Wang et al., 2018) is a reinforcement learning method that explicitly selects the most relevant paragraph to a given question for the subsequent reading comprehension module.",
"Table 2 shows the results on the three YNQA datasets.",
"We merely reported the classification accuracy on the development sets since the test sets are unavailable.",
"BERT-HA+STM outperformed all the baselines, which demonstrates the effectiveness of our method.",
"Compared with BERT-MLP, BERT-HA achieved better performance on all the three Model EM F1 GA (Dhingra et al., 2017a) 26.4 26.4 BiDAF (Seo et al., 2017) 25.9 28.5 R 3 (Wang et al., 2018) 35.3 41.7 DSQA (Lin et al., 2018) 40.7 47.6 +distant supervision 41.7 48.7 +STM 41.8 49.2 Table 4: Experimental results on the test set of Quasar-T.",
"datasets, indicating that distant supervision on evidence extraction can benefit Yes-No question answering.",
"However, compared with BERT-HA, BERT-HA+RL made no improvement on MARCO and BoolQ, possibly due to the high variance in training.",
"Similarly, BERT-HA+Rule performed worse than BERT-HA on CoQA and MARCO, implying that it is more difficult for the rule-based methods (inverse term frequency) to find correct evidence in these two datasets.",
"In contrast, our method BERT-HA+STM is more general and performed the best on all datasets.",
"BERT-HA+STM achieved comparable performance with BERT-HA+Gold, which stands for the upper bound by providing golden evidence labels, indicating that the effectiveness of noisy labels in our method.",
"Table 3 shows the experimental results on the three MMRC datasets.",
"We adopt the metrics from the referred papers.",
"STM improved BERT-HA consis-Model/Dataset CoQA MultiRC P@1 R@1 R@2 R@3 P@1 P@2 P@3 BERT-HA 20.0 28.2 49.8 62.5 62.3 55.2 46.6 +RL 5.2 10.5 22.3 32.9 24.0 25.3 24.7 +Rule 38.4 32.4 53.6 65.1 71.8 59.6 48.7 +STM (iter 1) 32.7 32.8 57.1 70.1 72.2 63.3 52.5 +STM (iter 2) 37.3 32.9 58.0 71.3 72.7 64.4 53.5 +STM (iter 3) 39.9 31.4 55.3 68.8 69.5 61.6 51.6 BERT-HA+Gold 53.6 33.7 59.5 73.4 74.5 65.9 54.8 Table 5: Evidence extraction evaluation on the development sets of CoQA and MultiRC.",
"tently on RACE-H, MultiRC and DREAM in terms of all the metrics.",
"However, the improvement on RACE-M is limited ( 1 . 0 gain on the test sets).",
"The reason may be that RACE-M is much simpler than RACE-H, and thus, it is not challenging for the evidence extractor of BERT-HA to find the correct evidence on RACE-M.",
"Table 4 shows the exact match scores and F1 scores on Quasar-T.",
"Distant evidence supervision (DS) indicates whether a passage contains the answer text.",
"Compared with the base models DSQA and DSQA+DS, DSQA+STM achieved better performance in both metrics, which verifies that DSQA can also benefit from Self-Training.",
"Our method is general and can improve both lightweight and heavyweight models, like LSTM-based and BERT-based models, in different tasks.",
"To evaluate the performance of STM on evidence extraction, we validated the evidence labels generated by several methods on the development sets of CoQA and MultiRC.",
"Considering that the evidence of each question in MultiRC is a set of sentences, we adopted precision @ k and recall @ k as the metrics for MultiRC, which represent the precision and recall of the generated evidence labels, respectively, when k sentences are predicted as evidence.",
"We adopted only precision @1 as the metric for CoQA as this dataset provides each question with one golden evidence sentence.",
"Table 5 shows the performance of five methods for evidence labeling on the CoQA and MultiRC development sets.",
"It can be seen that BERT-HA+STM outperformed the base model BERT-HA by a large margin in terms of all the metrics.",
"As a result, the evidence extractor augmented with STM provided more evidential information for the answer predictor, which may explain the improvements of BERT-HA+STM on the two datasets.",
"To examine whether error propagation exists and how severe it is in STM, we visualized the evolution of evidence predictions on the development set of CoQA (Figure 3).",
"From the inside to the outside, the four rings show the statistic results of the evidence predicted by BERT-HA (iteration 0) and BERT-HA+STM (iteration 1, 2, 3).",
"Each ring is composed of all the instances from the development set of CoQA, and each radius corresponds to one sample.",
"If the evidence of an instance is predicted correctly, the corresponding radius is marked in green, otherwise in purple.",
"Two examples are shown in the appendix due to space limit.",
"Self-correction.",
"As the innermost ring shows, about 80% of the evidence predicted by BERT-HA (iter 0) was incorrect.",
"However, the proportion of wrong instances reduced to 60% after self-training (iter 3).",
"More concretely, 27% of the wrong predictions were gradually corrected with high confidence within three self-training iterations, as exemplified by instance A in Figure 3.",
"Error propagation.",
"We observed that 4% of the evidence was mistakenly revised by STM, as exemplified by instance B in Figure 3.",
"In such a case, the incorrect predictions are likely to be retained in the next iteration.",
"But almost 50% of such mistakes were finally corrected during the subsequent iterations like instance C .",
"This observation shows that STM can prevent error propagation to avoid catastrophic failure.",
"To evaluate the improvement of STM over stronger pre-trained models, we employed RoBERTa-large (Liu et al., 2019) as the encoder in the base model.",
"Table 6 shows the results on CoQA.",
"STM significantly improved the evidence extraction (Evi. Acc) of the base model.",
"However, the improvement on answer prediction (Ans. Acc) is marginal.",
"One reason is that RoBERTa-HA achieved such a high performance that there was limited room to improve.",
"Another possible explanation is that evidence information is not important for such stronger models to generate answers.",
"In other words, they may be more adept at exploiting data bias to make answer prediction.",
"In comparison, weaker pre-trained models, such as BERT-base, can benefit from evidence information due to their weaker ability to exploit data bias.",
"We present an iterative self-training method (STM) to improve MRC models with soft evidence extraction, when golden evidence labels are unavailable.",
"In this iterative method, we train the base model with golden answers and pseudo evidence labels.",
"The updated model then generates new pseudo evidence labels, which can be used as additional supervision in the next iteration.",
"Experiment results show that our proposed method consistently improves the base models in seven datasets for three MRC tasks, and that better evidence extraction indeed enhances the final performance of MRC.",
"As future work, we plan to extend our method to other NLP tasks which rely on evidence finding, such as natural language inference.",
"This work was jointly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096), and the National Key R&D Program of China (Grant No.",
"2018YFC0830200).",
"We thank THUNUS NExT Joint-Lab for the support."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"In this study, we investigate robustness against covariate drift in spoken language understanding (SLU).",
"Covariate drift can occur in SLU when there is a drift between training and testing regarding what users request or how they request it.",
"To study this we propose a method that exploits natural variations in data to create a covariate drift in SLU datasets.",
"Experiments show that a state-of-the-art BERT-based model suffers performance loss under this drift.",
"To mitigate the performance loss, we investigate distributionally robust optimization (DRO) for finetuning BERT-based models.",
"We discuss some recent DRO methods, propose two new variants and empirically show that DRO improves robustness under drift.",
"A common assumption in machine learning is that training data and test data are independent and identically distributed (i.i.d.).",
"Unfortunately, this may not hold in practice and the test distribution might have drifted from the training distribution which can lead to a significant drop of performance in real-world applications (Moreno-Torres et al., 2012).",
"Consider spoken language understanding (SLU), i.e., the task of mapping an utterance to a machine readable semantic interpretation, which is commonly used in voice controlled devices like Alexa, Siri or Google Assistant.",
"Distributional drifts can be caused by seasonal and non-seasonal factors.",
"For example, festive holidays can lead to many requests outside the daily routine.",
"New users might use an uncommon phrasing to express their intent or they might request an uncommon song to be played.",
"Such drifts in the input distribution are referred to as covariate drift.",
"When users' requests fail to be recognized by the device they might rephrase their intent until they succeed, essentially adapting to the SLU model's distribution.",
"This means that, even when new training samples are drawn from new user utterances, the dominance of the old distribution already present in the SLU model is reinforced.",
"Fine-tuned pre-trained language models (PLM), such as BERT (Devlin et al., 2019) yield strong performance on SLU benchmarks (Chen et al., 2019).",
"Yet, it has been observed that also PLMs are vulnerable to drifts, and there is a high interest in understanding the robustness of PLMs (Oren et al., 2020a; McCoy et al., 2019; Tu et al., 2020; Cao et al., 2020).",
"The goals of this study are to investigate the impact of covariate drift on BERT's performance, and to experimentally investigate distributionally robust optimization (DRO) for finetuning BERT.",
"While we focus on sequence classification for SLU, i.e., intent classification (IC) and slot filling (SF), we expect the insights of this study to be applicable also to other sequence classification tasks.",
"To study the impact of covariate drift on model robustness, we require a dataset with known properties of the drift.",
"However, real data for this setting is not publicly available.",
"Therefore, we devised a method to create a train/test split with a controlled drift for sequence labeling data, which we call SEQDRIFT .",
"Roughly speaking, SEQDRIFT creates clusters of examples based on the example's tokens and sequence labels.",
"Then those clusters are used to create a new train/test split while leaving the label distribution intact.",
"Notably, SEQDRIFT does not artificially alter the utterances and only exploits natural lexical variations in the data in a non-adversarial manner.",
"Our experiments on publicly available SLU datasets repartitioned with SEQDRIFT showed that a state-of-the-art BERT-based model for SLU (Chen et al., 2019) trained with standard optimization suffers up to 5% absolute performance loss.",
"Currently, it is an open question which range of measures are helpful to improve the generalization under drift.",
"In this study, we investigated distributionally robust optimization (DRO), which has 1970 recently gained interest in NLP for overparameterized models (Oren et al., 2019; Sagawa et al., 2020; Liu et al., 2021; Michel et al., 2021).",
"It is an optimization concept which assumes that the training data is a mixture of distributions, e.g., different user demographics.",
"The objective is to be optimal under each distribution.",
"For example, the methods proposed by Oren et al. (2019) and Sagawa et al. (2020) assume knowledge about groups of training instances such as topics or ethnic origin that can be used by the optimization.",
"Roughly speaking, they propose to compute the loss across groups instead across individual instances.",
"However, such group knowledge might not be available and there are other methods which do not require such prior knowledge.",
"TOPK (Levy et al., 2020; Kawaguchi and Lu, 2020), for example, simply uses the top-k largest losses in a batch and was shown to obtain robust models.",
"We performed an extensive experimental analysis to investigate the usefulness of several DRO methods across different scenarios.",
"Most studies only evaluate DRO methods in one setting with in-distribution validation data and one drift type per dataset.",
"To achieve a broader insight into the usefulness of the investigated methods we evaluated them in 8 scenarios per dataset, i.e., for different types of drift, or model selection with in-distribution and out-of-distribution validation data.",
"Additionally, we propose an intuitive variant of TOPK , namely TOPK-GROUP or TOPK-AUTOENCODER to investigate if prior group knowledge or latent group knowledge could improve TOPK .",
"We found that TOPK , TOPK-GROUP and TOPK-AUTOENCODER can significantly improve robustness in many scenarios, where TOPK is more reliable in terms of significant improvement, while TOPK-AUTOENCODER can be better in terms of relative improvement.",
"In this section, we provide a brief background to spoken language understanding.",
"Subsequently, we discuss common categorizations of dataset drifts, empirical analyses of drifts and then describe distributionally robust optimization.",
"In this study, we focus on SLU for single-turn utterances and non-nested intents.",
"Parsing utterances into API calls is broadly either done by task oriented semantic parsing (TOP), or as intent classification (IC) and slot filling (SF).",
"IC is the task to classify an utterance into user intents, such as PlayMusic , FindBook or GetWeather .",
"Meanwhile, SF is a sequence tagging task to identify spans of tokens that represent the intent's slot fillers, such as ArtistName , AlbumName or TrackNumber .",
"In state-of-the-art approaches, IC and SF are typically modeled jointly using deep neural networks (Chen et al., 2019).",
"A common assumption in machine learning is that the training and test data are independent and identically distributed (i.i.d.) and that the distributions are the same between training and test, i.e. P train ( x, y ) = P test ( x, y ) .",
"Unfortunately, in practice, the test data is often out of distribution (o.o.d.), i.e. P train ( x, y ) (cid:54) = P test ( x, y ) .",
"This can be caused by sampling bias, such that subpopulations are not equally represented in the samples of the two distributions.",
"As this phenomenon is often caused by time-varying covariates, i.e., seasonal and nonseasonal changes, this phenomenon is referred to as a drift .",
"However, this can be a general mismatch between the sampled subpopulations, which can be of geographic or demographic type, or they can be topics or domains.",
"Drifts can also be caused by noise or automatic training data generation, in which filtering heuristics introduce a systematic issue, or by adversaries that exploit weaknesses of a specific model or model class.",
"Distributional drifts can be categorized into (Moreno-Torres et al., 2012): concept drift, i.e., when the meaning changes, prior probability drift and covariate drift.",
"Covariate drift.",
"When P train ( x ) (cid:54) = P test ( x ) , then there is a drift in the input distribution, and when the concept P ( y | x ) does not change between training and testing, this is referred to as covariate drift.",
"The challenge is that the population and its subpopulations are unknown because only samples are observed and thus P ( x | y ) cannot be perfectly learned.",
"For covariate drift a model has to generalize to samples from subpopulations that are almost unseen, e.g., a spam classifier should generalize to a new spam campaign ( x, y ) test = ( word_in_email=cheap, is_spam=True ) while only observing ( x, y ) train = ( word_in_email=money, is_spam=True ) .",
"One of the conjectures for transfer learning using PLMs is 1971 that task finetuning PLMs can generalize to inputs that are semantically similar to training instances.",
"One reason for a lack of robustness to covariate drift is when models overfit on patterns between the input and the desired labels, e.g., when they would learn that only cheap is a predictor for spam.",
"Another reason can be spurious correlations between patterns in the input and the label.",
"Sagawa et al. (2020); Tu et al. (2020) showed this for entailment prediction.",
"They grouped instances by a certain input attribute (does or does not contain negation) and target labels (is or is not entailment).",
"The attribute did not have any direct relation to the label.",
"Then they partitioned the data in such a way that there was a correlation between the polarity of the attribute and the label in the training data and an inverse correlation in the test data: D train = [( negation = + , entailment = ) , ( negation = , entailment = +)] and D test = [( negation = + , entailment = +)] , ( negation = , entailment = )] .",
"They showed that models learn this correlation instead of the semantics of the actual task and then fail on test instances.",
"Currently, there is a rising interest to investigate distributional drifts in various domains (Tu et al., 2020; Sagawa et al., 2020; Koh et al., 2021; Dunn et al., 2020; Shankar et al., 2019; Oren et al., 2020b), most prominently in Computer Vision (CV), but also in NLP.",
"To study distributional drifts, researchers need datasets with a controlled drift between training and test data.",
"This can be broadly achieved in two ways: Synthetic methods.",
"A dataset drift is created by corrupting the input features with synthetic noise, for example, adding pixel noise (Goodfellow et al., 2015), perturbing the input with generative models (Dunn et al., 2020) or perturbing characters and words (Cao et al., 2020).",
"It has been observed that robustness against synthetic noise does not imply robustness against semantically meaningful perturbations (Koh et al., 2021; Dunn et al., 2020).",
"Natural variations.",
"Another option is to exploit natural variations, for example, using video frames for which a model's object prediction flips between adjacent frames (Shankar et al., 2019).",
"Koh et al. (2021) collected a large benchmark for naturally occuring drifts in CV and NLP, e.g., user demographics for toxicity detection in online comments.",
"Sgaard et al. (2021) investigated the difference of model performance comparing random splits with heuristics like splitting the data based on sentence length or by maximizing the divergence of the token feature vectors of the train and test split.",
"In this work, we exploit natural variations in the data to create a drift in a non-adversarial manner.",
"Our conjecture is that this setting is a good proxy to a realistic evaluation scenario.",
"Robustness.",
"The robustness of a machine learning model is the property that characterizes how effective the model is while being tested on a new dataset.",
"In this paper, robustness is formally defined as follows.",
"Let D be a dataset split into D train , D valid , D test and let E be a performance measure E : D R (w.l.o.g. greater is better).",
"We assume that there is a covariate drift between D train and D test .",
"Given two models A and B with parameters A , B estimated on D train .",
"We call a model B more robust than model A when E ( B , D test ) > E ( A , D test ) and E ( B , D valid ) E ( B , D test ) < E ( A , D valid ) E ( A , D test ) Empirical Risk Minimization (ERM ).",
"Commonly used optimization algorithms assume that all examples are from the same population.",
"This assumption stems from ERM 's optimization objective which treats each example in D train P with equal importance, i.e., = inf EP [ l ( x, y ; )] .",
"This optimization may have a negative impact on model robustness.",
"For example, Tu et al. (2020) found that using ERM for finetuning PLMs learns spurious correlations even in the presence of a few helpful counter examples.",
"Distributionally Robust Optimization (DRO).",
"DRO is based on the assumption that D train consists of samples from many subpopulations, i.e., distributions Q from an uncertainty set U ( x, y ) .",
"The objective in DRO is then to optimize the parameters such that they are optimal under the worst case distribution in U ( x, y ) , i.e., = inf sup QEQ [ l ( x, y ; )] .",
"DRO is effective when the proportions of the distributions Q are highly skewed in D train .",
"For example, this can help to avoid learning spurious correlations, because even very few counter examples in the data are amplified.",
"The challenge in applying the DRO concept is that the subpopulations are not observable and U ( x, y ) has to be modeled by some prior knowledge about the data.",
"Our goal is to study the impact of covariate drift on model performance.",
"Therefore, we need a benchmark with controlled drift, but currently there are no publicly available SLU benchmarks in which real drifts can be studied.",
"As motivated in Section 2.3, we do not want to employ synthetic noise, i.e., our goal is to design a method that exploits natural variations in the data.",
"Moreover, the method should not be adversarial, i.e., not designed or optimized to target a specific model or model class.",
"Instead, we target two semantic drifts that might occur in real data due to: how users express their intent, and what users request.",
"We conjecture that it is possible to capture how users express themselves by creating clusters of utterances with similar slot contexts.",
"To capture what users request could be achieved by clusters of utterances with similar slot values.",
"A drift can then be created by partitioning the data based on those clusters into training and testing.",
"We avoid creating a mismatch of the label distributions between training and testing.",
"If a mismatch would occur, it would not be possible to derive conclusions about covariate drift from changes in performance because the shift in the label distribution also leads to changes in the measured performance.",
"In the following, we describe our approach in detail.",
"The high-level overview for creating a drift dataset version is as follows:",
"(i) Join all splits from the original data.",
"(ii) Transform examples into feature representations.",
"(iii) Use spectral clustering to obtain K clusters based on the feature representations.",
"(iv) Create the test split based on the clusters by sampling clusters instead of sampling examples.",
"Slot value drift To cluster examples by what users request\" we chose the feature representation of slot value n-grams. Table 1 shows an example in which only the slot values (the non-gray cells) are used to generate n-grams for an utterance, e.g. song or too poetic . The expected effect of splitting the data based on clusters of examples using this representation is that the training split is missing certain slot values, and thus we encounter unseen artists during testing. Slot context drift The feature representation to cluster training examples by how users express an intent or slot\" are n-grams of slot labels and the tokens around them. For example, using only the non-gray cells in Table 2 to generate n-grams would yield add B-SONG by B-ARTIST or to my B-PLAYLIST as features to represent the example.",
"The expected effect of this drift is that the test data contains phrases which are not seen during training.",
"Now, using the feature representation for either the slot value drift or slot context drift , we use spectral clustering to create K clusters and proceed to create the data splits.",
"Test split.",
"First the test split is created by sampling clusters and all the clusters' examples are added to the test split.",
"To avoid a mismatch of the label distribution between training and the new test split, the method uses a projected label count per split to decide whether a cluster can be used.",
"For example, let's assume we defined a 5% test split percentage and there are 1000 examples with the intent-slot label PLAYMUSIC-ARTIST.",
"Then the test split should have 50 examples with the 1973 intent-slot label PLAYMUSIC-ARTIST.",
"Hence, a cluster which contains 70 examples with the intent-slot label PLAYMUSIC-ARTIST cannot be used for the test split because it would exceed the projected label count.",
"Thus, when a cluster is sampled all of its examples are added to the test split if they do not disturb the projected label count.",
"This is repeated until all clusters have been sampled once and have been added to the test split or not.",
"When the test split does not match the projected label count, it is filled using random examples from clusters that have not been used for test so far.",
"These examples do not count into the controlled drift.",
"O.O.D. validation One variation is that the validation data could be o.o.d. instead of distributed like the training data.",
"This is a hypothetical setting in which we have access to o.o.d. data for validation and can observe to what degree hyperparameter tuning and model selection do factor into the drift effect.",
"To achieve this we create the validation data in the same way as the test data, but validation and test do not share drift clusters.",
"Full drift and partial drift In the default behavior all the examples of a cluster are shifted into the test split which we call a full drift .",
"However, a natural question is what happens when a small percentage of a test cluster leaks into training.",
"We call this setting a partial drift .",
"In the experiments in Section 5 we will show that using ERM optimization on the SEQDRIFT partitioned datasets is not robust.",
"There might be many measures to mitigate this effect, and the best solution will most likely consist of a mix of methods.",
"One candidate is DRO that has seen a rising interest to be applied to overparameterized models.",
"In the following, we first briefly discuss the setting of finetuning a pretrained language model (PLM), and subsequently we describe existing and proposed DRO methods.",
"In our setup, a pretrained language model M consists of a pretrained encoder ENC , and one (or more) task classifier head(s) C task .",
"Let X be a batch of inputs of size b , then the hidden representations of M are the output of the encoder X enc = ENC ( X ) .",
"For example, in our study we denote the averaged hidden token representations of size d after the last layer of BERT as X enc R b d .",
"To finetune M for a new task, the parameters of the encoder and the task classifier heads are optimized with a loss function L task to obtain the task batch loss l task = L task ( C task , X enc ) R b .",
"The following methods differ mainly in the way they manipulate the task batch loss l task .",
"The following DRO methods are by no means exhaustive.",
"They represent either methods proposed so far in NLP or have a desirable property, e.g., being simple or conceptually interesting.",
"The main differences between the methods is that they either use or do not use group knowledge in their objective.",
"Those models that do require knowledge about groups in the data will use the clusters created by the SEQDRIFT algorithm.",
"However, using the SEQDRIFT clusters is somewhat artificial because this is perfect information.",
"Therefore, we are especially interested in methods that do not require group knowledge.",
"TOPIC-CVAR This method was proposed by Oren et al. (2019) for language modeling.",
"They use a topic model to obtain a distribution over topics for each sentence to model the uncertainty set.",
"The core idea is to accumulate the losses for each topic over the course of training.",
"In each update a subset of losses in l task is selected, i.e., the losses of those batch items that are assigned to the topic that currently lies in the upper percentile of accumulated losses.",
"GROUP-DRO This method was proposed by Sagawa et al. (2020) for data where groups are known such that each example is assigned to one group.",
"Similar to TOPIC-CVAR their method keeps statistics of the accumulated losses, but for groups rather than for topics.",
"In GROUP-DRO the batch losses in l task are first averaged per group and the final loss is a weighted average over group losses.",
"For batch construction their method upsamples groups reciprocally to their frequency.",
"TOPK This method does not require group knowledge and is simple to implement: it simply computes the loss as an average over the topk largest losses in l task (Levy et al., 2020; Kawaguchi and Lu, 2020).",
"We found TOPK to be very effective in initial experiments.",
"By contrast, GROUP-DRO and TOPICCVAR did not perform well in our setting, even though both have been shown to work well.",
"Thus, we propose the following TOPK variants: TOPK-GROUP .",
"If group information is available, can TOPK be improved by it?",
"Here the idea is similar to TOPIC-CVAR and GROUP-DRO to use the precomputed SEQDRIFT clusters as groups and compute the TOPK loss per group.",
"Then only the largest TOPK group loss is picked, which has the effect of upsampling difficult groups and downsampling easy groups over the course of training.",
"However, when the precomputed SEQDRIFT clusters are used, this is more an oracle, i.e., an upper bound of how much can be inferred from the training data using perfect information.",
"TOPK-AUTOENCODER (TOPK-AE).",
"What if we do not have access to the precomputed clusters?",
"Could we approximate them using the PLM's hidden representations X enc ?",
"Our idea is to use the X enc representations to cluster the b batch items into c latent groups.",
"The latent groups are then used in the loss computation like in TOPK-GROUP .",
"The clustering is obtained from an autoencoder which is trained on X enc and is continuously updated during training.",
"Thus, the group assignment of a training example can change over the course of training according to the model's changing hidden representations.",
"We investigated hard cluster assignment TOPK-AE-BIN and soft cluster assignment TOPK-AE-PROB .",
"See Appendix B for all the details regarding the autoencoder and its training.",
"Discussion Table 3 compares the different methods discussed in this study, and shows if the method relies on precomputed groups or if the groups are implicit or adaptively inferred during training.",
"In this section, we present our experiments to investigate the following questions: (Q1) Does the standard optimization ERM suffer a performance loss under the SEQDRIFT covariate drift?",
"(Q2)",
"How well can ERM and DRO methods exploit a scarce signal about the test distribution, i.e., when is DRO relevant?",
"(Q3)",
"As all optimization methods come with hyperparameters, how much better could each method perform with access to o.o.d. validation data to optimize hyperparameters and perform early stopping?",
"Would DRO still be better than ERM ?",
"(Q4)",
"Are the DRO methods more robust than ERM against the SEQDRIFT covariate drift, and which DRO method is the most effective?",
"SLU Model.",
"We use the JointBERT model for SLU (Chen et al., 2019).",
"Two small changes that we introduce are:",
"(i) an intent loss scaler for the joint tasks loss L = L slot + L intent and (2) using softmax layer instead of CRF for the sequence tagging classifier.",
"We established the usefulness of those two changes with a hyperparameter study 1 .",
"The source datasets for SEQDRIFT are four commonly used SLU benchmarks, which are listed in Table 6.",
"All technical details and settings for SEQDRIFT are discussed in Appendix A. Table 5 shows an excerpt from a cluster from the ATIS dataset and demonstrates how the slot context drift cluster contains examples with similar phrases, in this case utterances with the phrase between B-from.city and B-to.city .",
"Use of datasets.",
"To study robustness it is inevitable to look at test performance.",
"Thus, we did not use all datasets for all stages of experimentation: Prototyping of SEQDRIFT was only done on ATIS, and then final experiments with ERM were done on all four datasets.",
"The prototyping and initial experiments for the DRO methods were mostly done on ATIS and a few trials on SNIPS.",
"The final DRO experiments were conducted on SNIPS and TOP.",
"All dataset scenarios In total we can evaluate a method in eight different scenarios per dataset, i.e., the cross product of {slot value drift, slot context drift} {partial drift, full drift} {i.i.d. validation, o.o.d. validation}.",
"Table 4 shows the resulting statistics for the datasets SNIPS and TOP-NN.",
"The percentage of examples resulting from a controlled drift in the test set are 66 79% for SNIPS and 44 47% for TOP.",
"For the scenario with partial drift 2 3% of the training data split belong to clusters that have been deliberately shifted into the test split.",
"Metrics.",
"We use the following metrics: F1 the slot F1 metric; ACCURACYthe intent accuracy; COMBINED-IC-SF the average of F1 and Accuracy.",
"Hyperparameters.",
"To ensure a fair comparison of methods in the experiments, we performed a hyperparameter search with the objective to optimize for COMBINED-IC-SF for each optimization method for four scenarios {slot value drift, slot #int.",
"context drift} {i.i.d. validation, o.o.d. validation} with partial drift (see Section 3.4).",
"For each setting we ran 8 hyper-parameter optimization steps 2 , then picked the two best hyper-parameter settings and retrained them with a different random seed.",
"Then we reused the best hyper-parameters for the full drift.",
"See Appendix C and Table C.9 for the remaining details about the hyperparameters.",
"(a) Drop in absolute ERM performance from validation to test on four datasets with two drift types (slot value, slot context) and drift percentage (full, partial).",
"(b) Amount of times a method was significantly worse or better than ERM on SNIPS for either full drift or partial drift out of 4 scenarios each.",
"(c) Amount of times a method was significantly worse or better than ERM on SNIPS when the validation data is i.i.d. or o.o.d., i.e. out of 4 scenarios each.",
"The reported results for each scenario are always averaged from four models, i.e., the models obtained with the two best hyperparameter settings that each have been trained with two random seeds.",
"We computed significance with p < 0 .",
"05 between models with approximate randomization (Noreen, 1989).",
"Due to the repetition with different random seeds, this effectively results in a family-wise error rate of 0 .",
"185 .",
"In Figures 1b and 1c we count in how many scenarios a DRO method was significantly better or worse than ERM .",
"The remaining instances performed the same as ERM .",
"(Q1)",
"Does the SEQDRIFT covariate drift lead to a drop in performance for ERM ?",
"In Figure 1a, it can be observed that ERM 's performance does drop up to 5% in slot F1 between validation and test.",
"However, the amount of change varies between datasets.",
"In most scenarios, slot F1 suffers a higher drop in performance than intent accuracy.",
"Slot context drift yields a higher loss than the slot value drift , so it seems that it is easier to generalize to unseen slot values than to unseen phrases.",
"This makes intuitively sense, e.g., Please play New Unknown Artist. can be recognized by just knowing the sequence please play B-ARTIST I-ARTIST ... , but it is more difficult to generalize to a new unseen phrase.",
"See Appendix D Table 10 for the numerical results.",
"which 2 3% of the training data are leaked examples from test clusters.",
"Thus, during training there is some information about test clusters that could be exploited.",
"For ERM , Figure 1a shows indeed that the partial slot context drift leads to a smaller drop in performance than the full slot context drift .",
"Thus, we conclude that ERM can exploit this information.",
"For DRO, Figure 1b shows that there are less scenarios with significant improvement from DRO methods over ERM with a partial drift than with a full drift (see numerical results Appendix D Table 14).",
"It is important to note that all methods E RM and DRO do improve, but ERM improves more than most of the DRO methods.",
"Only TOPKGROUP and TOPK still improve over ERM .",
"Anecdotally, in a SEQDRIFT setting in which only 80% or less of the clusters are drifted into the test split and the percentage of the drift cluster examples make up more than 5% of the training data, the significant improvement of all the DRO methods vanishes.",
"(Q3)",
"Does o.o.d. validation data help hyperparameter optimization and early stopping?",
"In Figure 1c, we observe that the amount of significant improvement over ERM shrinks when the validation data is o.o.d. and thus contains information how to perform well for the test split.",
"This affects hyper-parameter optimization and early stopping which also helps ERM to obtain a model from the training data that performs better on the test distribution.",
"This can serve as an upper bound of possible improvement that can be derived from the 1977 significantly worse significantly better 0 2 4 6 8 10 12 14 16 GROUP-DRO CVaR TOPK-AE-BIN TOPK TOPK-GROUP Figure 2: Amount of times a method was significantly worse or better than ERM on SNIPS and TOP out of 16 scenarios.",
"training data alone.",
"See Appendix D Table 13 for the detailed numerical results.",
"(Q4)",
"Are the DRO methods more robust against the drifts than ERM ?",
"Figure 2 shows in how many scenarios the DRO methods improved significantly over ERM on the SNIPS and TOP dataset.",
"TOPK and TOPK-GROUP only improve significantly over ERM and do not perform worse.",
"TOPK-AE-BIN only performs worse one time but otherwise the same or better.",
"The results indicate that TOPK -based methods do improve robustness.",
"TOPK-GROUP performs best amongst all methods, i.e., group information helps TOPK -based methods.",
"TOPK-AE-BIN performs slightly worse in terms of significant improvement than TOPK , however, in terms of average relative improvement over ERM it is on par or better than TOPK (see Tables 11 and 12 in the Appendix).",
"Yet, the lesser amount of significant improvement of TOPK-AE-BIN in comparison to TOPK-GROUP shows that approximating the group information is difficult.",
"Without perfect group information a simple method like TOPK might be the most reliable method to obtain a robust model.",
"See more detailed results in Appendix D Table 11 and Table 12.",
"Discussion GROUP-DRO , TOPIC-CVAR and TOPK-GROUP use the SEQDRIFT clusters in their optimization.",
"Therefore, these results should be rather seen as an upper bound of how much can be inferred from the training data using perfect information.",
"Still, GROUP-DRO and TOPIC-CVAR both fail to perform well in this experiment.",
"Note that both methods had the same amount of budget for hyper-parameter optimization as other methods.",
"For GROUP-DRO we used the authors' published code 3 and also their implementation of TOPICCVAR .",
"Our conjectures about this finding are: (1) GROUP-DRO and TOPIC-CVAR both have been proposed and studied for groups that have much higher lexical variance than the groups in our data.",
"The groups in our dataset consist by construction of many examples with similar lexical patterns and can be of small size, i.e., as little as 10 examples.",
"This might explain why they seem to overfit heavily.",
"(2) Another difference to our methods is that our proposed methods do not use an exponential average of historical group loss statistics.",
"We studied finetuning BERT for SLU datasets with covariate drift.",
"We presented the SEQDRIFT method to induce a covariate drift for SLU sequence classification tasks.",
"The experimental results showed that this drift in the input distribution leads to a drop in performance on four SLU datasets for a common BERT-based SLU model finetuned with ERM .",
"We investigated DRO methods that either use or do not use knowledge about groups in the data.",
"Our empirical results in an extensive study indicate that TOPK -based DRO methods are successful in improving robustness on the drift datasets.",
"We would like to thank Rainer Gemulla, Patrick Lehnen and ACL reviewers for helpful feedback for revising and improving the paper."
] | [
"objective",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"method",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"other",
"abstain",
"method",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"result",
"other"
] |
[
"Predicting the emotional value of lexical items is a well-known problem in sentiment analysis.",
"While research has focused on polarity for quite a long time, meanwhile this early focus has been shifted to more expressive emotion representation models (such as Basic Emotions or Valence-Arousal-Dominance).",
"This change resulted in a proliferation of heterogeneous formats and, in parallel, often small-sized, non-interoperable resources (lexicons and corpus annotations).",
"In particular, the limitations in size hampered the application of deep learning methods in this area because they typically require large amounts of input data.",
"We here present a solution to get around this language data bottleneck by rephrasing word emotion induction as a multi-task learning problem.",
"In this approach, the prediction of each independent emotion dimension is considered as an individual task and hidden layers are shared between these dimensions.",
"We investigate whether multi-task learning is more advantageous than single-task learning for emotion prediction by comparing our model against a wide range of alternative emotion and polarity induction methods featuring 9 typologically diverse languages and a total of 15 conditions.",
"Our model turns out to outperform each one of them.",
"Against all odds, the proposed deep learning approach yields the largest gain on the smallest data sets, merely composed of one thousand samples.",
"Deep Learning (DL) has radically changed the rules of the game in NLP by dramatically boosting performance figures in almost all applications areas.",
"Yet, one of the major premises of high-performance DL engines is their dependence on huge amounts of training data.",
"As such, DL seems ill-suited for areas where training data are scarce, such as in the field of word emotion induction.",
"We will use the terms polarity and emotion here to distinguish between research focusing on se-mantic orientation (Hatzivassiloglou and McKe-own, 1997) (the positiveness or negativeness) of affective states, on the one hand, and approaches which provide predictions based on some of the many more elaborated representational systems for affective states, on the other hand.",
"Originally, research activities focused on polarity alone.",
"In the meantime, a shift towards more expressive representation models for emotion can be observed that heavily draws inspirations from psychological theory, e.g., Basic Emotions (Ek-man, 1992) or the Valence-Arousal-Dominance model (Bradley and Lang, 1994).",
"Though this change turned out to be really beneficial for sentiment analysis in NLP, a large variety of mutually incompatible encodings schemes for emotion and, consequently, annotation formats for emotion metadata in corpora have emerged that hinder the interoperability of these resources and their subsequent reuse, e.g., on the basis of alignments or mergers (Buechel and Hahn, 2017).",
"As an alternative way of dealing with thus unwarranted heterogeneity, we here examine the potential of multi-task learning (MTL; Caruana (1997)) for word-level emotion prediction.",
"In MTL for neural networks, a single model is fitted to solve multiple, independent tasks (in our case, to predict different emotional dimensions) which typically results in learning more robust and meaningful intermediate representations.",
"MTL has been shown to greatly decrease the risk of overfitting (Baxter, 1997), work well for various NLP tasks (Setiawan et al., 2015; Liu et al., 2015; Sgaard and Goldberg, 2016; Cummins et al., 2016; Liu et al., 2017; Peng et al., 2017), and practically increases sample size, thus making it a natural choice for small-sized data sets typically found in the area of word emotion induction.",
"After a discussion of related work in Section 2, we will introduce several reference methods and describe our proposed deep MTL model in Section",
"3. In our experiments (Section 4), we will first validate our claim that MTL is superior to single-task learning for word emotion induction.",
"After that, we will provide a large-scale evaluation of our model featuring 9 typologically diverse languages and multiple publicly available embedding models for a total of 15 conditions.",
"Our MTL model surpasses the current state-of-the-art for each of them, and even performs competitive relative to human reliability.",
"Most notably however, our approach yields the largest benefit on the smallest data sets, comprising merely one thousand samples.",
"This finding, counterintuitive as it may be, strongly suggests that MTL is particularly beneficial for solving the word emotion induction problem.",
"Our code base as well as the resulting experimental data is freely available.",
"1 2 Related Work This section introduces the emotion representation format underlying our study and describes external resources we will use for evaluation before we discuss previous methodological work.",
"Emotion Representation and Data Sets.",
"Psychological models of emotion can typically be subdivided into discrete (or categorical ) and dimensional ones (Stevenson et al., 2007; Calvo and Mac Kim, 2013).",
"Discrete models are centered around particular sets of emotional categories considered to be fundamental.",
"Ekman (1992), for instance, identifies six Basic Emotions (Joy, Anger, Sadness, Fear, Disgust and Surprise).",
"In contrast, dimensional models consider emotions to be composed of several influencing factors (mainly two or three).",
"These are often referred to as Valence (a positivenegative scale), Arousal (a calmexcited scale), and Dominance (perceived degree of control over a (social) situation)the VAD model (Bradley and Lang (1994); see Figure 1 for an illustration).",
"Many contributions though omit Dominance (the VA model) (Russell, 1980).",
"For convenience, we will still use the term VAD to jointly refer to both variants (with and without Dominance).",
"VAD is the most common framework to acquire empirical emotion values for words in psychology.",
"Over the years, a considerable number of such resources (also called emotion lexicons) have emerged from psychological research labs (as well as some NLP labs) for diverse languages.",
"The emotion lexicons we use in our experiments are listed in Table",
"1. An even more extensive list of such data sets is presented by Buechel and Hahn (2018).",
"For illustration, we also provide three sample entries from one of those lexicons in Table",
"2. As can be seen, the three affective dimensions behave complementary to each other, e.g., terrorism and orgasm display similar Arousal but opposing Valence.",
"The task we address in this paper is to predict the values for Valence, Arousal and Dominance, given a lexical item.",
"As is obvious from these examples, we consider emotion prediction as a regression, not as a classification problem (see arguments discussed in Buechel and Hahn (2016)).",
"In this paper, we focus on the VAD format for the following reasons: First, note that the Valence dimension exactly corresponds to polarity (Turney and Littman, 2003).",
"Hence, with the VAD model, emotion prediction can be seen as a generalization over classical polarity prediction.",
"Second, to the best of our knowledge, the amount and diversity of available emotion lexicons with VAD encodings is larger than for any other format (see Table 1).",
"Word Embeddings.",
"Word embeddings are dense, low-dimensional vector representations of words trained on large volumes of raw text in an unsupervised manner.",
"The following are among today's most popular embedding algorithms: 1908 Source ID Language Format # Entries Bradley and Lang (1999) EN English VAD 1,034 Warriner et al. (2013) EN+ English VAD 13,915 Redondo et al. (2007) ES Spanish VAD 1,034 Stadthagen-Gonzalez et al. (2017) ES+ Spanish VA 14,031 Schmidtke et al. (2014) DE German VAD 1,003 Yu et al. (2016a) ZH Chinese VA 2,802 Imbir (2016) PL Polish VAD 4,905 Montefinese et al. (2014) IT Italian VAD 1,121 Soares et al. (2012) PT Portuguese VAD 1,034 Moors et al. (2013) NL Dutch VAD 4,299 Sianipar et al. (2016) ID Indonesian VAD 1,490 Table 1: Emotion lexicons used in our experiments (with their bibliographic source, identifier, language they refer to, emotion representation format, and number of lexical entries they contain).",
"WORD 2 VEC (with its variants SGNS and CBOW ) features an extremely trimmed down neural network (Mikolov et al., 2013).",
"FASTTEXT is a derivative of WORD 2 VEC , also incorporating sub-word character n-grams (Bojanowski et al., 2017).",
"Unlike the former two algorithms which fit word embeddings in a streaming fashion, GLOVE trains word vectors directly on a word co-occurrence matrix under the assumption to make more efficient use of word statistics (Pen-nington et al., 2014).",
"Somewhat similar, SVDPPMI performs singular value decomposition on top of a point-wise mutual information co-occurrence matrix (Levy et al., 2015).",
"In order to increase the reproducibility of our experiments, we rely on the following widely used, publicly available embedding models trained on very large corpora (summarized in Table 3): the SGNS model trained on the Google News corpus 2 (GOOGLE ), the FASTTEXT model trained on Common Crawl 3 (COMMON ), as well as the FASTTEXT models for a wide range of languages trained on the respective Wikipedias 4 (WIKI ).",
"Note that WIKI denotes multiple embedding models with different training and vocabulary sizes (see Grave et al. (2018) for further details).",
"Additionally, we were given the opportunity to reuse the English embedding model from Sedoc et al. (2017) (GIGA ), a strongly related contribution (see below).",
"Their embeddings were trained on the English Gigaword corpus (Parker et al., 2011).",
"Word-Level Prediction.",
"One of the early approaches to word polarity induction which is still popular today (Koper and Schulte im Walde, 2016) was introduced by Turney and Littman (2003).",
"They compute the polarity of an unseen word based on its point-wise mutual information (PMI) to a set of positive and negative seed words, respectively.",
"SemEval-2015 Task 10E featured polarity induction on Twitter (Rosenthal et al., 2015).",
"The best system relied on support vector regression (SVR) using a radial base function kernel (Amir et al., 2015).",
"They employ the embedding vector of the target word as features.",
"The results of their SVR-based system were beaten by the DENSIFIER algorithm (Rothe et al., 2016).",
"DENSIFIER learns an orthogonal transformation of an embedding space into a subspace of strongly reduced dimensionality.",
"Hamilton et al. (2016) developed SENTPROP , a graph-based, semi-supervised learning algorithm which builds up a word graph, where vertices correspond to words (of known as well as unknown polarity) and edge weights correspond to the similarity between them.",
"The polarity information is then propagated through the graph, thus computing scores for unlabeled nodes.",
"According to their evaluation, DENSIFIER seems to be superior overall, yet SENTPROP produces competitive results 1909 ID Language Method Corpus # Tokens # Types # Dimensions GOOGLE English SGNS Google News 1 10 11 3 10 6 300 COMMON English FASTTEXT Common Crawl 6 10 11 2 10 6 300 GIGA English CBOW Gigawords 4 10 9 2 10 6 300 WIKI all FASTTEXT Wikipeda 300 Table 3: Embedding models used for our experiments with identifier, language, embedding algorithm, training corpus, its size in the number of tokens, size of the vocabulary (types) of the resulting embedding model and its dimensionality.",
"For word emotion induction, a very similar approach to SENTPROP has been proposed by Wang et al. (2016a).",
"They also propagate affective information (Valence and Arousal, in this case) through a word graph with similarity weighted edges.",
"Sedoc et al. (2017) recently proposed an approach based on signed spectral clustering where a word graph is constructed not only based on word similarity but also on the considered affective information (again, Valence and Arousal).",
"The emotion value of a target word is then computed based on the seed words in its cluster.",
"They report to outperform the results from Wang et al. (2016a).",
"Contrary to the trend to graph-based methods, the best system of the IALP 2016 Shared Task on Chinese word emotion induction (Yu et al., 2016b) employed a simple feed-forward neural network (FFNN) with one hidden layer in combination with boosting (Du and Zhang, 2016).",
"Another very recent contribution which advocates a supervised set-up was published by Li et al. (2017).",
"They propose ridge regression, again using word embeddings as features.",
"Even with this simple approach, they report to outperform many of the above methods in the VAD prediction task.",
"6 Sentence-Level and Text-Level Prediction.",
"Different from the word-level prediction task (the one we focus on in this contribution), the determination of emotion values for higher-level linguistic units (especially sentences and texts) is also heavily investigated.",
"For this problem, DL approaches are meanwhile fully established as the method of choice (Wang et al., 2016b; Abdul-Mageed and Ungar, 2017; Felbo et al., 2017; Mohammad and Bravo-Marquez, 2017).",
"It is important to note, however, that the methods discussed for these higher-level units cannot easily be transferred to solve the word emotion induction problem.",
"Sentence-level and text-level architectures are either adapted to sequential input data (typical for RNN, LSTM, GRNN and related architectures) or spatially arranged input data (as with CNN architectures).",
"However, for word embeddings (the default input for word emotion induction) there does not seem to be any meaningful order of their components.",
"Therefore, these more sophisticated DL methods are, for the time being, not applicable for the study at hand.",
"In this section, we will first introduce various reference methods (two originally polarity-based for which we offer adaptations for VAD prediction) before defining our own neural MTL model and discussing its difference from previous work.",
"Let V := { w 1 , w 2 , ..., w m } be our word vocabulary and let E := { e 1 , e 2 , ..., e m } be a set of embedding vectors such that e i R n denotes the n dimensional vector representation of word w i .",
"Let D := { d 1 , d 2 , ..., d l } be a set of emotional dimensions.",
"Our task is to predict the empirically determined emotion vector emo ( w ) R l given a word w and the embedding space E .",
"where W is a matrix, W i contains the regression coefficients for the i -th affective dimension and b is the vector of bias terms.",
"The model parameters are fitted using ordinary least squares.",
"Technically, we use the scikit-learn.org implementation with default parameters.",
"Ridge Regression (RidgReg).",
"Li et al. (2017) propose ridge regression for word emotion induction.",
"Ridge regression works identically to linear regression during prediction, but introduces L 2 regularization during training.",
"Following the authors, for our implementation, we again use the scikit-learn implementation with default parameters.",
"Turney-Littman Algorithm (TL).",
"As one of the earliest contributions in the field, Turney and Littman (2003) defined a simple PMI-based approach to determine the semantic polarity SPTL of a word w : SPTL ( w ) := X s seeds + pmi ( w, s ) X s seeds pmi ( w, s ) (2) where seeds + and seeds are sets of positive and negative seed words, respectively.",
"Since this algorithm is still popular today (Koper and Schulte im Walde, 2016), we here provide a novel modifica-tion for adapting this originally polarity-based approach to word emotion induction with vectorial seed and output values.",
"First, we replace PMI-based association of seed and target word w and s by their similarity sim based on their word embeddings e w and e s : sim ( w, s ) := max (0 , e w e s || e w || || e s || ) (3) emo ( w ) := X s seeds + sim ( w, s ) X s seeds sim ( w, s ) (4) Although this step is technically not required for the adaptation, it renders the TL algorithm more comparable to the other approaches evaluated in Section 4 besides from most likely increasing performance.",
"Equation (4) can be rewritten as emo ( w ) := X s seeds sim ( w, s ) emo ( s ) (5) where seeds := seeds + seeds and emo ( s ) maps to 1 , if s seeds + , and 1 , if s seeds .",
"Equation (5) can be trivially adapted to an n dimensional emotion format by redefining emo ( s ) such that it maps to a vector from R n instead of { 1 , 1 } .",
"Our last step is to introduce a normalization term such that emo ( w ) TL lies within the range of the seed lexicon.",
"P (6) As can be seen from Equation (6), for the more general case of n -dimensional emotion prediction, the Turney-Littman algorithm naturally translates into a weighted average where the seed emotion values are weighted according to the similarity to the target item.",
"Densifier.",
"Rothe et al. (2016) train an orthogonal matrix Q R n n ( n being the dimensionality of the word embeddings) such that applying Q to an embedding vector e i concentrates all the polarity information in its first dimension such that the polarity of a word w i can be computed as SPDENSIFIER ( w i ) := pQe i (7) where p = (1 , 0 , 0 , ..., 0) T R 1 n .",
"For fitting Q, the seeds are arranged into pairs of equal polarity (the set pairs = ) and those of opposing polarity ( pairs 6 = ).",
"A good fit for Q will minimize the distance within the former and maximize the distance within the latter which can be expressed by the following two training objectives: argmin QX ( w i ,w j ) pairs = | pQ ( e i e j ) | (8) argmax QX ( w i ,w j ) pairs 6 = | pQ ( e i e j ) | (9) The objectives described in the expressions (8) and (9) are combined into a single loss function (using a weighting factor [0 , 1] ) which is then minimized using stochastic gradient descent (SGD).",
"To adapt this algorithm to dimensional emotion formats, we construct a positive seed set, seeds + v , and a negative seed set, seeds v , for each emotion dimension v D .",
"Let M v be the mean value of all the entries of the training lexicon for the affective dimension v .",
"Let SD v be the respective standard deviation and R , 0 .",
"Then all entries greater than M v + SD v are assigned to seeds + v and those less than M v SD v are assigned to seeds v .",
"Q is fitted individually for each emotion dimension v .",
"Training was performed according to the original paper with the exception that (following Hamilton et al. (2016)) we did not apply the proposed re-orthogonalization after each training 1911 step, since we did not find any evidence that this procedure actually results in improved performance.",
"The hyperparameters and were set to .",
"7 and .",
"5 (respectively) for all experiments based on a pilot study.",
"Since the original implementation is not accessible, we devised our own using tensorflow.org .",
"Boosted Neural Networks (ensembleNN).",
"Du and Zhang (2016) propose simple FFNNs in combination with a boosting algorithm.",
"An FFNN consists of an input or embedding layer with activation a (0) R n which is equal to the embedding vector e k when predicting the emotion of a word w k .",
"The input layer is followed by multiple hidden layers with activation a ( l +1) := ( W ( l +1) a ( l ) + b ( l +1) ) (10) where W ( l +1) and b ( l +1) are the weights and biases for layer l + 1 and is a nonlinear activation function.",
"Since we treat emotion prediction as a regression problem, the activation on the output layer a out (where out is the number of non-input layers in the network) is computed as the affine transformation a ( out ) := W ( out ) a ( out 1) + b ( out ) (11) Boosting is a general machine learning technique where several weak estimators are combined to form a strong estimator.",
"The authors used FFNNs with a single hidden layer of 100 units and rectified linear unit (ReLU) activation.",
"The boosting algorithm AdaBoost.R2 (Drucker, 1997) was used to train the ensemble (one per affective dimension).",
"Our re-implementation copies their technical set-up 7 exactly using scikit-learn .",
"The approaches introduced in Section 3.1 and Section 2 vary largely in their methodological foundations, i.e., they comprise semi-supervised and supervised machine learning techniquesboth statistical and neural ones.",
"Yet, they all have in common that they treat the prediction of the different emotional dimensions as separate tasks.",
"That is, they fit one individual model per VAD dimension without sharing parameters between them.",
"In contradistinction, the key feature of our approach is that we fit a single FFNN model to 7 Original settings available at https://github.",
"predict all VAD dimensions jointly , thus applying multi-task learning to word emotion induction.",
"Hence, we treat the prediction of Valence, Arousal and Dominance as three independent tasks.",
"Our multi-task learning neural network (MTLNN) (de-picted in Figure 2) has an output layer of three units such that each output unit represents one of the VAD dimensions.",
"However, the activation in our two hidden layers (of 256 and 128 units, respectively) is shared across all VAD dimensions, and so are the associated weights and biases.",
"Thus, while we train our MTLNN model it is forced to learn intermediate representations of the input which are generally informative for all VAD dimensions.",
"This serves as a form of regularization, since it becomes less likely for our model to fit the noise in the training set as noise patterns may vary across emotional dimensions.",
"Simultaneously, this has an effect similar to an increase of the training size, since each sample now leads to additional error signals during backpropagation.",
"Intuitively, both properties seem extremely useful for relatively small-sized emotion lexicons (see Section 4 for empirical evidence).",
"The remaining specifications of our model are as follows.",
"We use leaky ReLU activation (LReLU) as nonlinearity (Maas et al., 2013).",
"with := .",
"01 for our experiments.",
"For regularization, dropout (Srivastava et al., 2014) is applied during training with a probability of .",
"2 on the embedding layer and .",
"5 on the hidden layers.",
"We train for 15 , 000 iterations (well beyond convergence on each data set we use) with the ADAM optimizer (Kingma and Ba, 2015) of .",
"001 base learning rate, 1912 batch size of 128 and Mean-Squared-Error loss.",
"The weights are randomly initialized (drawn from a normal distribution with a standard deviation . 001 ) and biases are uniformly initialized as .",
"01 .",
"Tensorflow is used for implementation.",
"In this section, we first validate our assumption that MTL is superior to single-task learning for word emotion induction.",
"Next, we compare our proposed MTLNN model in a large-scale evaluation experiment.",
"Performance figures will be measured as Pearson correlation ( r ) between our automatically predicted values and human gold ratings.",
"The Pearson correlation between two data series X = x 1 , x 2 , ..., x n and Y = y 1 , y 2 , ..., y n takes values between +1 (perfect positive correlation) and 1 (perfect negative correlation) and is computed as r xy := P ni =1 ( x i x )( y i y ) pP ni =1 ( x i x ) 2 pP ni =1 ( y i y ) 2 (13) where x and y denote the mean values for X and Y , respectively.",
"The main hypothesis of this contribution is that an MTL set-up is superior to single-task learning for word emotion induction.",
"Before proceeding to the large-scale evaluation of our proposed model, we will first examine this aspect of our work.",
"For this, we use the following experimental setup: We will compare the MTLNN model against its single-task learning counterpart (SepNN).",
"SepNN simultaneously trains three separate neural networks where only the input layer, yet no parameters of the intermediate layers are shared across the models.",
"Each of the separate networks is identical to MTLNN (same layers, dropout, initialization, etc.), yet has only one output neuron, thus modeling only one of the three affective VAD dimensions.",
"SepNN is equivalent to fitting our proposed model (but with only one output unit) to the different VAD dimensions individually, one after the other.",
"Yet, training these separate networks simultaneously (not jointly!) makes both approaches, MTLNN and SepNN, easier to compare.",
"small, the latter relatively large; see Table 1) using the following set-up: for each gold lexicon and model, we randomly split the data 9 / 1 and train for 15 , 000 iterations on the larger split (the same number of steps is used for the main exper-iment).",
"After each one-thousand iterations step, model performance is tested on the held-out data.",
"This process will be repeated 20 times and the performance figures at each one-thousand iterations step will be averaged.",
"In a final step, we will average the results for each of the three emotional dimensions and only plot this average value.",
"The results of this experiment are depicted in Figure",
"3. First of all, each combination of model and data set displays a satisfactory performance of at least r .",
"75 after 15,000 steps compared to previous work (see below).",
"Overall, performance is higher for the smaller EN lexicon.",
"Although counterintuitive (since smaller lexicons lead to fewer training samples), this finding is consistent with prior work (Sedoc et al., 2017; Li et al., 2017) and is probably related to the fact that smaller lexicons usually comprise a larger portion of strongly emotion-bearing words.",
"In contrast, larger lexicons add more neutral words which tend to be harder to predict in terms of correlation.",
"As hypothesized, the MTLNN model does indeed outperform the single task model on both data sets.",
"Our data also suggest that the gain from the MTL approach is larger on smaller data sets (again in concordance with our expectations).",
"Figure 3 reveals that this might be due to the regularizing effect of MTL, since the SepNN model shows signs of overfitting on the EN data set.",
"Yet, even 1913 Language Data Embeddings LinReg RidgReg TL Densifier ensembleNN MTLNN English EN+ GOOGLE 0.696 0.696 0.631 0.622 0.728 0.739 *** English EN+ COMMON 0.719 0.719 0.659 0.652 0.762 0.767 *** English EN+ WIKI 0.666 0.666 0.591 0.584 0.706 0.712 *** English EN GOOGLE 0.717 0.732 0.723 0.712 0.688 0.810 *** English EN COMMON 0.731 0.741 0.741 0.726 0.717 0.824 *** English EN WIKI 0.656 0.667 0.674 0.665 0.681 0.777 *** Spanish ES WIKI 0.698 0.709 0.704 0.690 0.700 0.804 *** Spanish ES+ WIKI 0.693 0.694 0.603 0.598 0.766 0.778 *** German DE WIKI 0.709 0.719 0.714 0.710 0.700 0.801 *** Chinese ZH WIKI 0.716 0.717 0.586 0.599 0.737 0.744 ** Polish PL WIKI 0.650 0.650 0.577 0.553 0.687 0.712 *** Italian IT WIKI 0.656 0.665 0.672 0.659 0.630 0.751 *** Portuguese PT WIKI 0.673 0.684 0.685 0.678 0.672 0.768 *** Dutch NL WIKI 0.651 0.652 0.559 0.532 0.704 0.730 *** Indonesian ID WIKI 0.581 0.586 0.581 0.576 0.575 0.660 *** Average 0.638 0.659 0.611 0.605 0.676 0.728 *** Table 4: Results of our main experiment in averaged Pearson correlation; best result per condition (in rows) in bold, second best result underlined; significant difference (paired two-tailed t -test) over the second best system marked with *, **, or *** for p < .",
"when the separate model does not overfit (as on the EN+ lexicon), MTLNN reveals better results.",
"Although SepNN needs fewer training steps before convergence, the MTLNN model trains much faster, thus still converging faster in terms of runtime (about a minute on a middle-class GPU).",
"This is because MTLNN has only about a third as many parameters as the separate model SepNN.",
"We combined each of the selected lexicon data sets (Table 1) with each of the applicable publicly available embedding models (Section 2; the embedding model provided by Sedoc et al. (2017) will be used separately) for a total of 15 conditions, i.e, the rows in Table",
"4. For each of these conditions, we performed a 10-fold cross-validation (CV) for each of the 6 methods presented in Section 3 such that each method is presented with the identical data splits.",
"8 For each condition, algorithm, and VA(D) dimension, we compute the Pearson correlation r between gold ratings and predictions.",
"For conciseness, we present only the average correlation over the respective affective dimensions in Table 4 (Va-lence and Arousal for ES+ and ZH, VAD for the others).",
"Note that the methods we compare ourselves against comprise the current state-of-the art in both polarity and emotion induction (as described in Section 2).",
"8 This procedure constitutes a more direct comparison than using different splits for each method and allows using paired t -tests.",
"As can be seen, our proposed MTLNN model outperforms all other approaches in each of the 15 conditions.",
"Regarding the average over all affective dimensions and conditions, it outperforms the second best system, ensembleNN, by more than 5% -points.",
"In line with our results from Section 4.1, those improvements are especially pronounced on smaller data sets containing one up to two thousand entries (EN, ES, IT, PT, ID) with close to 10% -points improvement over the respective second-best system.",
"Concerning the relative ordering of the affective dimensions, in line with former studies (Sedoc et al., 2017; Li et al., 2017), the performance figures for the Valence dimension are usually much higher than for Arousal and Dominance.",
"Using MTLNN, for many conditions, we see the pattern that Valence is about 10% -points above the VAD average, Arousal being 10% -points below and Dominance being roughly equal to the average over VAD (this applies, e.g., to EN, EN+ and IT).",
"On other data sets (e.g., PL, NL and ID), the ordering between Arousal and Dominance is less clear though Valence still stands out with the best results.",
"We observe the same general pattern for the reference methods, as well.",
"Concerning the comparison to Sedoc et al. (2017), arguably one of most related contributions, they report a performance of r = .",
"768 for Valence and .",
"582 for Arousal on the EN+ data set in a 10-fold CV using their own embeddings.",
"In contrast, MTLNN using the COMMON model achieves r = .",
"870 and .",
"674 in the same set-upabout 10% 1914 Valence Arousal Dominance MTLNN EN .918 .730 .825 MTLNN EN+ .870 .674 .758 ISR EN EN+ .953 .759 .795 SHR EN+ .914 .689 .770 Table 5: Comparison of the MTLNN model against inter-study reliability (ISR) between the EN and the EN+ data set and split-half reliability (SHR) of the EN+ data set (in Pearson correlation).",
"points better on both dimensions.",
"However, the COMMON model was trained on much more data than the embeddings Sedoc et al. (2017) use.",
"For the most direct comparison, we also repeated this experiment using their embedding model (GIGA ).",
"We find that MTLNN still clearly outperforms their results with r = .",
"814 for Valence and .",
"607 for Arousal.",
"9 MTLNN achieves also very strong results in direct comparison to human performance (see Table 5).",
"Warriner et al. (2013) (who created EN+) report an inter-study reliability (ISR; i.e., the correlation of the aggregated ratings from two different studies) between the EN and the EN+ lexicon of r = .",
"953 , .",
"759 and .",
"795 for VAD, respectively.",
"Since EN is a subset of EN+, we can compare these performance figures against our own results on the EN data set where we achieved r = .",
"918 , .",
"730 and .",
"825 , respectively.",
"Thus, our proposed method did actually outperform human reliability for Dominance and is competitive for Valence and Arousal, as well.",
"This general observation is also backed up by split-half reliability data (SHR; i.e., when randomly splitting all individual ratings in two groups and averaging the ratings within each group, how strong is the correlation between these averaged ratings?).",
"For the EN+ data set, Warriner et al. (2013) report an SHR of r = .",
"914 , .",
"689 and .",
"770 for VAD, respectively.",
"Again, our MTLNN model performs very competitive with r = .",
"870 , .",
"674 and .",
"758 , respectively using the COMMON embeddings.",
"In this paper, we propose multi-task learning (MTL) as a simple, yet surprisingly efficient method to improve the performance and, at the same time, to deal with existing data limitations",
"9 We also clearly outperform their results for the NL and ES+ data sets.",
"For these cases, our embedding models were similar in training size.",
"in word emotion inductionthe task to predict a complex emotion score for an individual word.",
"We validated our claim that MTL is superior to single-task learning by achieving better results with our proposed method in performance as well as training time compared to its single-task counterpart.",
"We performed an extensive evaluation of our model on 9 typologically diverse languages, using different kinds of word embedding models for a total 15 conditions.",
"Comparing our approach to state-of-the-art methods from word polarity and word emotion induction, our model turns out to be superior in each condition, thus setting a novel state-of-the-art performance for both polarity and emotion induction.",
"Moreover, our results are even competitive to human annotation reliability in terms of inter-study as well as split-half reliability.",
"Since this contribution was restricted to the VAD format of emotion representation, in future work we will examine whether MTL yields similar gains for other representational schemes, as well.",
"We would like to thank the Positive Psychology Center, University of Pennsylvania for providing us with the embedding model used in Sedoc et al. (2017), Johannes Hellrich, JULIE Lab, for insightful discussions, and the reviewers for their valuable comments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"objective",
"method",
"objective",
"result",
"objective",
"other"
] |
[
"Distant supervision based methods for entity and relation extraction have received increasing popularity due to the fact that these methods require light human annotation efforts.",
"In this paper, we consider the problem of shifted label distribution , which is caused by the inconsistency between the noisy-labeled training set subject to external knowledge graph and the human-annotated test set, and exacerbated by the pipelined entity-then-relation extraction manner with noise propagation.",
"We propose a joint extraction approach to address this problem by re-labeling noisy instances with a group of cooperative multiagents.",
"To handle noisy instances in a fine-grained manner, each agent in the cooperative group evaluates the instance by calculating a continuous confidence score from its own perspective; To leverage the correlations between these two extraction tasks, a confidence consensus module is designed to gather the wisdom of all agents and re-distribute the noisy training set with confidence-scored labels.",
"Further, the confidences are used to adjust the training losses of extractors.",
"Experimental results on two real-world datasets verify the benefits of re-labeling noisy instance, and show that the proposed model significantly outperforms the state-of-the-art entity and relation extraction methods.",
"The extraction of entities and relations has long been recognized as an important task within natural language processing, as it facilitates text understanding.",
"The goal of the extraction task is to identify entity mentions, assign predefined entity types, and extract their semantic relations from text corpora.",
"For example, given a sentence Washington is the president of the United States of America , Corresponding author.",
"an extraction system will find a PRESIDENT OF relation between PERSON entity Washington and COUNTRY entity United States of America .",
"A major challenge of the entity and relation extraction task is the absence of large-scale and domain-specific labeled training data due to the expensive labeling efforts.",
"One promising solution to address this challenge is distant supervision (DS) (Mintz et al., 2009; Hoffmann et al., 2011), which generates labeled training data automatically by aligning external knowledge graph (KG) to text corpus.",
"Despite its effectiveness, the aligning process introduces many noisy labels that degrade the performance of extractors.",
"To alleviate the introduced noise issue of DS, extensive studies have been performed, such as using probabilistic graphical models (Surdeanu et al., 2012), neural networks with attention (Zeng et al., 2015; Lin et al., 2016) and instance selector with reinforcement learning (RL) (Qin et al., 2018; Feng et al., 2018).",
"However, most existing works overlooked the shifted label distribution problem (Ye et al., 2019), which severely hinders the performance of DS-based extraction models.",
"Specifically, there is a label distribution gap between DS-labeled training set and human-annotated test data, since two kinds of noisy labels are introduced and they are subject to the aligned KG: (1) False Positive: unrelated entity pair in the sentence while labeled as relations in KG; and (2) False Negative: related entity pair while neglected and labeled as NONE.",
"Existing denoising works assign low weights to noisy instances or discard false positives while not recovering the original labels, leaving the shifted label distribution problem unsolved.",
"Moreover, most denoising works assume that the target entities have been extracted, i.e., the entity and relation extraction is processed in a pipe-lined manner.",
"By extracting entities first and then classifying predefined relations, the entity extraction Confidence Evaluators Re-distribution with Confidence-Scored Labels Entity-ViewAgents Noisy Training Dataset Positive Instances False Positives Relation-ViewAgents Confidence-Scored Labels Positive Instances Negative Instances False Negatives False Positives Partial False Positive Partial False Negatives Reward Confidence Consensus Action Action Base Extractors Iteratively Re-Train Negative Instances False Negatives Entity Classifier Relation Classifier Figure 1: Overview of the proposed method.",
"errors will be propagated to the relation extractor, introducing more noisy labels and exacerbating the shifted label problem.",
"Besides, there are some correlations and complementary information between the two extraction tasks, which are under-utilized but can provide hints to reduce noises more precisely, e.g., it is unreasonable to predict two COUNTRY entities as the relation PRESIDENT OF .",
"In this paper, to reduce the shifted label distribution gap and further enhance the DS-based extraction models, we propose a novel method to re-label the noisy training data and jointly extract entities and relations.",
"Specifically, we incorporate RL to re-label noisy instances and iteratively retrain entity and relation extractors with adjusted labels, such that the labels can be corrected by trial and error.",
"To leverage the correlations between the two extraction tasks, we train a group of cooperative multiagents to evaluate the instance confidence from different extraction views.",
"Through a proposed confidence consensus module, the instances are re-labeled with confidence-scored labels, and such confidence information will be used to adjust the training loss of extractors.",
"Finally, the performances of extractors are refined by exploring suitable label distributions with iterative re-training.",
"Empirical evaluations on two real-world datasets show that the proposed approach can effectively help existing extractors to achieve remarkable extraction performance with noisy labels, and the agent training is efficient with the help of correlations between these two extraction tasks.",
"In this research, we aim to refine entity extractor and relation extractor trained with DS, by incorporating a group of cooperative multiagents.",
"Formally, given a DS training corpus D = { s 1 , . . . , s n } , an entity extractor (cid:48) e and a relation extractor (cid:48) r trained on D are input into the multiagents.",
"The agents re-distribute D with confidence-scored labels and output two refined extractors e and r using the adjusted labels.",
"Towards this purpose, we model our problem as a decentralized multiagents RL problem, where each agent receives local environmental observation and takes action individually without inferring the policies of other agents.",
"It is hard to directly evaluate the correctness of adjusted noisy labels since we do not know the gold training label distributions suitable to the test set.",
"Nonetheless, we can apply RL to indirectly judge the re-labeling effect by using performance scores on an independent validation set as rewards, which is delayed over the extractor re-training.",
"Further, the decentralization setting allows the interaction between the distinct information of entity and relation extractors via intermediate agents.",
"As shown in Figure 1, a group of agents acts as confidence evaluators, and the external environment consists of training instances and classification results of extractors.",
"Each agent receives a private observation from the perspective of entity extractor or relation extractor, and makes an independent action to compute a confidence score of the instance.",
"These actions (confidence scores) will then be considered together by the confidence consensus module, which determines whether the current sentence is positive or negative and assigns a confidence score.",
"Finally, the updated confidences are used to retrain extractors, the performance score on validation set and the consistent score of the two extractors are combined into rewards for agents.",
"The proposed method can be regarded as a postprocessing plugin for existing entity and relation extraction model.",
"That is, we design a general framework of the states, actions and rewards by reusing the inputs and outputs of the extractors.",
"A group of cooperative multiagents are used to evaluate the confidence of each instance.",
"These multiagents are divided into two subgroups, which act from the perspective of entity and relation respectively.",
"There can be multiple agents in each subgroup for the purpose of scaling to larger observation space and action space for better performance.",
"Next, we will detail the states, actions and rewards of these agents.",
"States The states S e for entity-view agents and S r for relation-view agents represent their own viewpoint to evaluate the instance confidence.",
"Specifically, entity-view agents evaluate sentence confidence according to three kinds of information: current sentence, the entity extraction results (typed entity) and the noisy label types.",
"Similarly, relation-view agents make their decisions depending on the current sentence, the relation types from relation extractor and the noisy label types from DS.",
"Most entity and relation extractors encode the semantic and syntactic information of extracted sentences into low-dimension embeddings as their inputs.",
"For entity types and relation types, we also encode them into embeddings and some extractors have learned these vectors such as CoType (Ren et al., 2017).",
"Given reused extractors, we denote the encoded sentence vector as s , the extracted type vector as t e and t r for entity and relation respectively, and DS type vectors as t ed and t rd for entity and relation respectively.",
"We reuse the sentence and type vectors of base extractors to make our approach lightweight and pluggable.",
"Finally, we average the extracted and DS type embeddings to decrease the size of observation space, and concatenate them with the sentence embedding s to form the states S e and S r for entity/relation agents respectively as follows: S e = s (cid:107) ( t e + t ed ) / 2 , S r = s (cid:107) ( t r + t rd ) / 2 , (1) Note that we have encoded some semantics into the type vectors, e.g., the margin-based loss used in CoType enforces the type vectors are closer to their candidate type vectors than any other noncandidate types.",
"Intuitively, in the representation spaces, the average operation leads in the midpoint of extracted type vector and DS type vector, which partially preserves the distance property among the two vectors and other type vectors, so that helps form distinguishable states.",
"Actions To assign confidence in a fine-grained manner and accelerate the learning procedure, we adopt a continuous action space.",
"Each agent uses a neural policy network to determine whether the current sentence is positive (conform with the extracted type t i ) or negative (None type) and computes a confidence score c .",
"We model this action as a conditional probability prediction, i.e., estimate the probability as confidence given by the extracted type t i and the current state S : c = p ( positive | t i , , S ) .",
"We adopt gated recurrent unit (GRU) as policy network, which outputs the probability value using sigmoid function.",
"A probability value (confidence score) which is close to 1/0 means that the agent votes a sentence as positive/negative with a high weight.",
"To handle huge state spaces (e.g., there are thousands of target types in our experimental dataset) and make our approach scalable, here we divide and conquer the state space by using more than one agent in entity-view and relation-view groups.",
"The target type set is divided equally by agent number and each agent only is in charge of a part of types.",
"Based on the allocation and DS labels, one sentence is evaluated by only one relation agent and two entity agents at a time, meanwhile, the other agents are masked.",
"Re-labeling with Confidence Consensus To leverage the wisdom of crowds, we design a consensus strategy for the evaluated confidences from multiagents.",
"This is conducted by two steps: gather confidences and re-label with confidence score.",
"Specifically, we calculate an averaged score as c = c sum / 3 , where c sum is the sum of all agent confidences and the dividing means three agents evaluated the present sentence due to the above masking action strategy.",
"Then we label the current sentence as negative (None type) with confidence C = 1 c if c 0 .",
"5 , otherwise we label the current sentence as positive (replace noisy label with extracted type) with confidence C = c .",
"This procedure can be regarded as weighted voting and re-distribute the training set with confidence-scored labels as shown in the right part of Figure 1, where some falsely labeled instances are put into intended positions or assigned with low confidences.",
"Rewards The reward of each agent is composed of two parts: shared global reward g expressing correlations among sub-tasks, and separate local rewards restricting the reward signals to different three agents for different sentences (recall that we evaluate each sentence by different agents w.r.t their responsible types).",
"Specifically, the global reward g can give hints for denoising and here we adopt a general, translation-based triple score as used in TransE (Bordes et al., 2013) g = || t 1 + t r t 2 || , where t 1 , t r and t 2 are embeddings for triple ( E 1 , R, E 2 ) and pre-trained by TransE.",
"The score is used to measure the semantic consistency of each triple and can be easily extended with many other KG embedding methods (Wang et al., 2017).",
"As for the separate local reward, we use F1 scores F e 1 and F r 1 to reflect the extractor performance, which are gained by entity extractor and relation extractor on an independent validation dataset 1 respectively.",
"Finally, to control the proportions of two-part rewards, we introduce a hyper-parameter , which is shareable for ease of scaling to multiple agents as: r e = F e 1 g, r r = F r 1 g.",
"With the evaluated confidences and re-labeled instances, we adjust the training losses of entity extractor and relation extractor to alleviate the performance harm from noise and shifted label distribution.",
"Denote the original loss of extractor as (cid:96) , the new loss (cid:96) (cid:48) is adjusted by an exponential scaling factor and confidence C as : (cid:96) (cid:48) = C (cid:96) .",
"Intuitively, a small confidence score C and a large indicate that the current instance has almost no impact on the model optimization.",
"This can alleviate 1 To gain a relatively clean data, we randomly select 20% data from the original training set, extract them using pretrained CoType model and retain only one instance for each sentence whose DS label is the same as the extracted label.",
"Algorithm 1 Training Framework for Extractors Input: Noisy training data D , pre-trained entity extractor (cid:48) e , pre-trained relation extractor (cid:48) r Output: refined entity/relation extractor e , r 1: pre-train policy networks of agents based on (cid:48) e and (cid:48) r 2: init: best F 1 e F 1( (cid:48) e ) , best F 1 r F 1( (cid:48) r ) 3: for epoch i = 1 N do 4: init: current extractors parameters e (cid:48) e , r (cid:48) r 5: for batch d i D do 6: extractors generate S e / S r as Equ.",
"(1) 7: agents take actions (confidences) 8: redistribute instances with confidences 9: train e / r with scaled losses (cid:96) (cid:48) e / (cid:96) (cid:48) r 10: calculate rewards r e and r r as Equ.",
"(2) 11: end for 12: if F 1( e ) > F 1 e then F 1 e F 1( e ) , e e 13: if F 1( r ) > F 1 r then F 1 r F 1( r ) , r r 14: end for side-effects caused by noises and prevent the gradient being dominated by noisy labels, especially for those with divergent votes since the averaging in confidence consensus module leads to a small C .",
"Pre-training Many RL-based models introduce pre-training strategies to refine the agent training efficiency (Qin et al., 2018; Feng et al., 2018).",
"In this study, we pre-train our models in two aspects: (1) we first pre-train entity and relation extractors to be refined as environment initialization, which is vital to provide reasonable agent states (embed-dings of sentences and extracted types).",
"(2) we then pre-train the policy networks of agents to gain a preliminary ability to evaluate confidence.",
"In order to guide the instance confidence evaluation, we extract a small part of the valid data.",
"The relatively clean DS type labels of the valid data are used to form states.",
"The binary label is assigned according to the valid data and the policy networks are pre-trained for several epochs.",
"Although the binary labels from valid data are not exactly the continuous confidence, the policy networks gain a better parameter initialization than random initialization by this approximate training strategy.",
"Iterative Re-training With the pre-trained extractors and policy networks, we retrain extractors and agents as Algorithm 1 detailed.",
"The agents refine extractors in each epoch and we record parameters of extractors that achieve best F1 performance.",
"For each data batch, entity and relation extractor perform extraction, form the states S e and S r as Equation (1), and send them to entity and relation agents respectively.",
"Then agents take actions (eval-uate confidences) and redistribute instance based on confidences consensus module (Section 2.2).",
"Finally extractors are trained with confidences and give rewards as Equation (2).",
"Curriculum Learning for Multiagents It is dif-ficult to learn from scratch for many RL agents.",
"In this study, we extend the curriculum learning strategy (Bengio et al., 2009) to our cooperative multiagents.",
"The motivation is that we can leverage the complementarity of the two tasks and enhance the agent exploration by smoothly increasing the policy difficulty.",
"To be more specific, we maintain a priority queue and sample instances ordered by their reward values.",
"Once the reward of current sentence excesses the training reward threshold r threshold or the queue is full, we then learn agents policies using Proximal Policy Optimization (PPO) (Schul-man et al., 2017) algorithm, which achieves good performances in many continuous control tasks.",
"Algorithm 2 details the training procedure.",
"Datasets We evaluate our approach on two public datasets used in many extraction studies (Pyysalo et al., 2007; Ling and Weld, 2012; Ren et al., 2017): Wiki-KBP : the training sentences are sampled from Wikipedia articles and the test set are manually annotated from 2013 KBP slot filling task; BioInfer : the dataset is sampled and manually annotated from biomedical paper abstracts.",
"The two datasets vary in domains and scales of type set, detailed statistics are shown in Table 1.",
"Baselines For relation extraction, we compare with both pipe-lined methods and joint extraction methods: MintZ (Mintz et al., 2009) is a feature-based DS method using a logistic classifier; MultiR (Hoffmann et al., 2011) models noisy DS labels with multi-instance multi-label learning; DS-Joint (Li and Ji, 2014) jointly extracts entities and relations using structured perceptron; FCM (Gormley et al., 2015) introduces a neural model to learn linguistic compositional representations; PCNN (Zeng et al., 2015) is an effective relation extraction architecture with piece-wise convolution; CoType (Ren et al., 2017) is a state-of-the-art joint extraction method leveraging representation learning for both entity and relation types; RRL-PCNN (Qin et al., 2018) is a state-of-the-art RL-based method, which takes PCNN as base extractor and can also be a plugin to apply to different relation extractors; ARNOR (Jia et al., 2019) is a state-of-the-art de-noising method, which proposes attention regulation to learn relation patterns; BA-fix-PCNN (Ye et al., 2019) greatly improves the extraction performance by introducing 20% samples of the test set and estimate its label distribution to adjust the classifier of PCNN.",
"For entity extraction methods, we compare with a supervised type classification method, HYENA (Yosef et al., 2012); a heterogeneous partial-label Wiki-KBP BioInfer Methods S-F1 Ma-F1 Mi-F1 S-F1 Ma-F1 Mi-F1 HYENA 0.26 0.43 0.39 0.52 0.54 0.56 FIGER 0.29 0.56 0.54 0.69 0.71 0.71 WSABIE 0.35 0.55 0.50 0.64 0.66 0.65 PLE 0.37 0.57 0.53 0.70 0.71 0.72 CoType 0.39 0.61 0.57 0.74 0.76 0.75 MRL-CoType ( improvements) 0.42 7.2e-3 0.64 1.1e-2 0.60 8.3e-3 0.77 6.5e-3 0.79 1.3e-2 0.78 7.4e-3 (+7.69%) (+4.92%) (+5.26%) (+4.05%) (+3.95%) (+4.00%) Table 2: NER performance on two datasets, 3-time average results with standard deviations are reported.",
"Multiagents Setup To evaluate the ability of our approach to refine existing extractors, we choose two basic extractors for our M ultiagent RL approach, CoType and PCNN, and denote them as MRL-CoType and MRL-PCNN respectively.",
"Since PCNN is a pipe-lined method, we reuse a pre-trained and fixed CoType entity extractor, and adopt PCNN as base relation extractor to adapt to the joint manner.",
"For the CoType, we use the implementation of the original paper 2 , and adopt the same sentence dimension, type dimension and hyper-parameters settings as reported in (Ren et al., 2017).",
"For the PCNN, we set the number of kernel to be 230 and the window size to be 3.",
"For the KG embeddings, we set the dimension to be 50 and pre-train them by TransE.",
"We use Stochasitc Gradient Descent and learning rate scheduler with cosine annealing to optimize both the agents and extractors, the learning rate range and batch size is set to be [1e-4, 1e-2] and 64 respectively.",
"Wiki-2 https://github.com/INK-USC/DS-RelationExtraction",
"KBP/BioInfer datasets respectively, according to their scales of type sets.",
"For the multi-agents, due to the limitation of RL training time, we set the PPO parameters as default RLlib setting and perform preliminary grid searches for other parameters.",
"For the PPO algorithm, we set the GAE lambda parameter to be 1.0, the initial coefficient for KL divergence to be 0.2.",
"The loss adjusting factor is searched among { 1, 2, 4 } and set to be 2, the reward control factors is searched among { 2e-1, 1, 2, 4 } and set to be 2.",
"For all agents, the dimensions of GRU is searched among { 32, 64 } , and the setting as 64 achieved sightly better performance than setting as 32, while the larger dimension setting leads to higher memory overhead for each agent.",
"Hence we set it to be 32 to enable a larger scale of the agents.",
"We adopt the Macro-F1, Micro-F1 and Strict-F1 metrics (Ling and Weld, 2012) in the entity extraction evaluation.",
"For Strict-F1, the entity prediction is considered to be strictly correct if and only if when the true set of entity tags is equal to the prediction set.",
"The results are shown in Table 2 and we can see that our approach can effectively refine the base extractors and outperform all baseline Wiki-KBP BioInfer Settings Precision(%) Recall(%) F1(%) Precision(%) Recall(%) F1(%) Curriculum 41.7 0.19 41.5 0.16 41.6 0.17 59.5 0.21 43.7 0.18 49.8 0.20 Joint (w/o curriculum) 41.3 0.22 40.9 0.20 41.1 0.21 58.7 0.24 42.6 0.19 48.5 0.23 Separate (w/o joint) 38.8 0.24 40.5 0.27 38.4 0.25 54.7 0.27 41.3 0.23 47.6 0.26 Table 4: Ablation results of the MRL-CoType for end-to-end relation extraction.",
"methods on all metrics.",
"Note that the refinements on BioInfer is significant (t-test with p < 0 . 05 ) even though the BioInfer has a large entity type set (2,200 types) and the base extractor CoType has achieved a high performance (0.74 S-F1), which shows that our agents are capable of leading entity extractors towards a better optimization with noisy.",
"Another comparison is the end-to-end relation extraction task, we report the precision, recall and F1 results in Table 3 and it illustrates that:",
"(1) Our method achieves best F1 for Wiki-KBP, outperforms all baselines on all metrics for BioInfer data, and significantly refines both the two base extractors, PCNN and CoType (t-test with p < 0 . 05 ),",
"demonstrating the effectiveness of our approach.",
"(2) The improvements for CoType are larger than PCNN.",
"Since CoType is a joint extraction model and leverages multi-agents better than the single-task extractor with fixed entity extractor.",
"This shows the benefit of correlations between the two extraction tasks.",
"(3) Using the same base relation extractor, the MRL-PCNN achieves significantly better improvements than RRL-PCNN (t-test with p < 0 . 05 ).",
"Besides, the precision of RRL-PCNN method is relatively worse than recall, which is mainly caused by the noise propagation of entity extraction and its binary discard-or-retain action.",
"By contrast, our model achieves better and more balanced results by leveraging the cooperative multiagents with fine-grained confidences.",
"(4) The MRL-PCNN gains comparable performance with BA-Fix-PCNN, which leverages the additional information from the test set to adjust softmax classifier.",
"This verifies the effectiveness and the robustness of the proposed RL-based relabeling method to reduce the shifted label distribution gap without knowing the test set.",
"To evaluate the impact of curriculum learning strategy and joint learning strategy of our method, we compare three training settings: curriculum learn-Curriculum",
"ing , standard training procedure as described in Section 2.3; joint multiagents training without curriculum learning (randomly sample training in-stances); and separate training without the participation of other agents using a pipeline manner, i.e., train an entity agent with only entity extractor and train a relation agent with only relation extractor.",
"The end-to-end relation extraction results are reported in Table 4.",
"The curriculum setting and the joint setting achieve much better results than the separate training setting.",
"This shows the superiority of cooperative multi-agents over single view extraction, which evaluates confidences with limited information.",
"Besides, the curriculum setting achieves better results than the joint setting, especially on the BioInfer data, which has a larger type set and is more challenging than Wiki-KBP.",
"This indicates the effectiveness of the curriculum learning strategy, which enhances the model ability to handle large state space with gradual exploration.",
"Training efficiency is an important issue for RL methods since the agents face the exploration-exploitation dilemma.",
"We also compare the three settings from the view of model training.",
"Figure 2 reports the average rewards for an entity agent and a relation agent on Wiki-KBP respectively.",
"A high average reward indicates that the agent is trained 0 5 10 15 20 25 30 relation mentions entity mentions p r o p o r t i o n ( % ) N-to-P N-to-P-divergent P-to-N P-to-N-divergent Wiki-KBP 0 5 10 15 20 25 30 relation mentions entity mentions p r o p o r t i o n ( % ) BioInfer N-to-P N-to-P-divergent P-to-N P-to-N-divergent Figure 3: Proportions of re-labeled instances for MRL-CoType.",
"effectively since it made valuable decisions and received positive feedback.",
"From it we have the following observations: (1) The curriculum setting and the joint setting gain better performance than the separate training, which is consistent with the end-to-end extraction results.",
"The improvement comes from the mutual enhancement among agents, since the correlations between the two tasks can restrict the reward signals to only those agents involved in the success or failure on the task; (2) The curriculum learning achieves higher rewards than the other two settings with fewer epochs, since that the convergence to local optimum can be accelerated by smoothly increasing the instance difficulty, and the multiagents provide a regularization effect.",
"To gain insight into the proposed method, we conduct a statistic on the final re-labeled instances.",
"Figure 3 reports the results and shows that our approach identifies some noisy instances including both positives and negatives, and leverage them in a fine-grained manner comparing with discard-or-retain strategy.",
"Besides, the instances which are re-labeled from negatives to positives take a larger proportion than those with inverse re-labeling assignments, especially on Wiki-KBP data.",
"This is in accordance with the fact that many noisy labels are None in DS setting.",
"Note that some instances are re-labeled with divergent evaluations between entity-view and relation-view agents, which are usually get low confidences through the consensus module and have a small impact on the optimization with damping losses.",
"We further sample two sentences to illustrate the re-labeling processes.",
"On Table 5, the first sentence has a noisy relation label None , while the relation extractor recognizes it as country of birth relation.",
"Based on the extracted type, the relation-view agent evaluates it as a confidential positive instance due to the typical pattern born in in the sentence.",
"The entity-view agents also evaluate it as positive with relatively lower confidences, and finally the sentence is re-labeled as positive by the consensus module.",
"For the second sentence, agents disagree that it is positive.",
"With the help of diverse extraction information, the consensus module re-labels the instance with low confidence score, and further alleviates the performance harm by loss damping.",
"Many entity and relation extraction methods have been proposed with the pipelined fashion, i.e., perform named entity recognition (NER) first and then relation classification.",
"Traditional NER systems usually focus on a few predefined types with supervised learning (Yosef et al., 2012).",
"However, the expensive human annotation blocks the large-scale training data construction.",
"Recently, several efforts on DS and weak supervision (WS) NER extraction have been made to address the training data bottleneck (Yogatama et al., 2015; Yang et al., 2018).",
"For relation extraction, there are also many DS methods (Mintz et al., 2009; Min et al., 2013; Zeng et al., 2015; Han and Sun, 2016; Ji et al., 2017; Lei et al., 2018) and WS methods (Jiang, 2009; Ren et al., 2016; Deng et al., 2019) to address the limitation of supervised methods.",
"Our method can be applied for a large number of those extractors as a post-processing plugin since the DS and WS usually incorporate many noises.",
"A recent work CrossWeigh (Wang et al., 2019) estimates the label mistakes and adjusts the weights of sentences in the NER benchmark CoNLL03.",
"They focus on the noises of supervised gold standard labels while we focus on the noises of automatically constructed silver standard labels.",
"Moreover, we deal with the noises by considering the shifted label distribution problem, which is overlooked by most existing DS works.",
"In Ye et al. (2019), this issue is analyzed and authors improve performance significantly by using the distribution information from test set.",
"In this paper, we propose to use RL to explore suitable label distributions by re-distributing the training set with confidence-scored labels, which is practical and robust to label distribution shift since we may not know the distribution of test set in real-world applications.",
"Another extraction manner is joint extraction, such as methods based on neural network with parameter sharing (Miwa and Bansal, 2016), represen-Sentence 1, False Negative, Label: ( Bashardost [/person], None , Ghazni [/location]) Entity Extractor Relation Extractor Entity Agents Relation Agent Confidence Consensus Bashardost , an ethnic Hazara, was born in Ghazni province to a family of government employees.",
"tation learning (Ren et al., 2017) and new tagging scheme (Zheng et al., 2017).",
"However, these works perform extraction without explicitly handling the noises.",
"Our approach introduces multiagents to the joint extraction task and explicitly model sentence confidences.",
"As for the RL-based methods, in Zeng et al. (2018), RL agent is introduced as bag-level relation predictor.",
"Qin et al. (2018) and Feng et al. (2018) use agent as instance selectors to discard noisy instances in sentence-level.",
"Different from adopting a binary action strategy and only focus on false positives in these works, we adopt a continuous action space (confidence evaluation) and handle the noises in a fine-grained manner.",
"The binary selection strategy is also adopted in a related study, Reinforced Co-Training (Wu et al., 2018), which uses an agent to select instances and help classifiers to form auto-labeled datasets.",
"An important difference is that they select unlabeled instances while we evaluate noisy instances and relabel them.",
"More recently, HRL (Takanobu et al., 2019) uses a hierarchical agent to first identifies relation indicators and then entities.",
"Different from using one task-switching agent of this work, we leverage a group of multiagents, which can be a pluggable helper to existing extraction models.",
"To deal with the noise labels and accompanying shifted label distribution problem in distant supervision, in this paper, we propose a novel method to jointly extract entity and relation through a group of cooperative multiagents.",
"To make full use of each instance, each agent evaluates the instance confidence from different views, and then a confidence consensus module is designed to re-label noisy instances with confidences.",
"Thanks to the exploration of suitable label distribution by RL agents, the confidences are further used to adjust the training losses of extractors and the potential harm caused by noisy instances can be alleviated.",
"To demonstrate the effectiveness of the proposed method, we evaluate it on two real-world datasets and the results confirm that the proposed method can significantly improve extractor performance and achieve effective learning.",
"This work is supported by the National Natural Science Foundation of China (No.61602013), and the Shenzhen General Research Project (No. JCYJ20190808182805919)."
] | [
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"method",
"other",
"method",
"objective",
"abstain",
"abstain",
"objective",
"other"
] |
[
"The Out-of-Domain (OOD) intent classification is a basic and challenging task for dialogue systems.",
"Previous methods commonly restrict the region (in feature space) of In-domain (IND) intent features to be compact or simply-connected implicitly, which assumes no OOD intents reside, to learn discriminative semantic features.",
"Then the distribution of the IND intent features is often assumed to obey a hypothetical distribution (Gaussian mostly) and samples outside this distribution are regarded as OOD samples.",
"In this paper, we start from the nature of OOD intent classification and explore its optimization objective.",
"We further propose a simple yet effective method, named KNN-contrastive learning.",
"Our approach utilizes K-Nearest Neighbors (KNN) of IND intents to learn discriminative semantic features that are more conducive to OOD detection.",
"Notably, the density-based novelty detection algorithm is so well-grounded in the essence of our method that it is reasonable to use it as the OOD detection algorithm without making any requirements for the feature distribution.",
"Extensive experiments on four public datasets show that our approach can not only enhance the OOD detection performance substantially but also improve the IND intent classification while requiring no restrictions on feature distribution.",
"Code is available.",
"1 1 Introduction People are getting accustomed to talking with or sending some instructions to task-oriented dialog system in natural language to assist them in fulfilling work.",
"As the environment facing the dialogue systems becomes more open, there are more utterances with unknown or Out-of-Domain (OOD) intents that the dialog system does not know how to handle.",
"As shown in Figure 1, the chatbot encounters an unsupported intent (OOD intent) utterance Corresponding author.",
"in the third interaction.",
"It is significant to distinguish these utterances for the dialogue system because identifying the intents of the user determines whether subsequent actions can be carried out correctly.",
"To solve this problem, the existing methods roughly can be summarized into two categories according to whether extensive labeled OOD intent samples are used during training.",
"The first kind of method (use OOD samples during training) is represented by (Zheng et al., 2020; Zhan et al., 2021; Choi et al., 2021) which regards OOD intent classification as a (n+1)-class classification task that the extra ( n + 1) th class represents labeled OOD intent.",
"These methods may need additional large and time-consuming labeled Out-of-Domain samples.",
"Moreover, manually constructed OOD samples endowed with artificial inductive bias cannot cover all open classes in the actual environment so this kind of method have their limitations.",
"In this paper, we focus on another kind of method which involves two stages, to learn dis-5129",
"How to learn semantic features benefit for OOD detection?",
"Minimizing intra-class variance and maximizing inter-class variance, whose motivation are to facilitate detection by widening margin between IND and OOD intents, have always been regarded as the essence of solving this problem (Lin and Xu, 2019; Zeng et al., 2021a).",
"Then, Shu et al. (2017); Zhang et al. (2021); Xu et al. (2020); Zeng et al. (2021a) impose (or implicitly) Gaussian distribution into the distribution of the learned intent features to OOD detection.",
"In general previous methods implicitly assume the region of semantic features as compact or simply-connected region in feature space, which means OOD intents only exist between different IND classes and not within IND distribution so that tight IND semantic features can be helpful for OOD detection.",
"However, as shown in Figure 2",
"(a), the actual location of OOD intents in semantic space is not limited, they can appear between IND classes or within IND distribution.",
"We name those OOD intents that are distributed between different IND classes as inter OOD intents, and those within IND distribution or can be surrounded by a convex hull composed of local IND intent samples as intra OOD intents.",
"As shown in Figure 2",
"(b), for inter OOD intents, minimizing intra-class variance and maximizing inter-class variance can reduce the risk of being identified IND while this risk may be increased for intra OOD intents due to being closer to IND intents.",
"At the same time, we conduct Gaussian Hypothesis Testing on the IND semantic features distribution in the CLINC-FULL train set, which is learned by (Zeng et al., 2021a).",
"we find only 57% IND classes conform to Gaussian distribution, which illustrates the Gaussian assumption for OOD detection in the previous methods may not be reasonable, see Appendix A.1 for results on other datasets and details.",
"To solve these problems, we explicitly define the optimization objective of OOD intents classification using open space risk (Scheirer et al., 2012).",
"Compared with the previous methods only considering inter OOD intents, we propose a simple yet effective method to consider both intra and inter OOD intents.",
"We utilize k-nearest neighbors of IND intent sample as positive samples as shown in Figure 2",
"(c), and obtain more negative samples with the help of queue in MoCo (He et al., 2020) to learn discriminative semantic features.",
"We further analyze why our method can better reduce open space risk.",
"Intuitively, our method leaves more margin around OOD intents, which can ensure we employ the basic density-based method for OOD detection without any assumptions about the distribution.",
"We summarize our contributions as follows.",
"Firstly, following open space risk, we explicit the optimization objective of OOD intent classification to provide a paradigm for solving OOD intent classification.",
"Secondly, we analyze the limitation of existing methods and propose a novel method to better reduce both empirical risk and open space risk.",
"Thirdly, extensive experiments conducted on four challenging datasets show our approach achieves consistent improvements without restrictions on feature distribution.",
"Schlkopf et al. (2001); Tax and Duin (2004) regard Out-of-Domain detection as One-class classification problem so that to find a hyperplane or hypersphere by kernels in high-dimensional space to distinguish OOD, Scheirer et al. (2012) firstly proposes the definition of open space risk and formalize open set recognition as a constrained optimization problem.",
"Then to obtain better semantic representation, deep neural network has been brought into this field in recent years.",
"Bendale and Boult (2016) propose OpenMax model, using the scores from the penultimate layer of deep network, to distinguish OOD.",
"MSP (Hendrycks and Gimpel, 2017) presents a baseline based on maximum softmax probabilities to exhibit the ability of the network to distinguish between OOD and IND.",
"To enlarge the difference between IND and OOD, Liang et al. (2018) add temperature scaling based on MSP and adds perturbations to the inputs.",
"The above methods mainly focus on computer vision and assume (or implicitly) the feature region is compact (simply-connected region), many research works are carried out in natural language processing.",
"DOC (Shu et al., 2017) builds a multi-class classifier and selects a threshold to reject.",
"Further, they reduces the open space risk for rejection by tightening the decision boundaries of sigmoid functions with Gaussian fitting.",
"LMCL (Lin and Xu, 2019) learns discriminative deep features with margin loss.",
"Yan et al. (2020); Wan et al. (2018) model embeddings with a Gaussian mixture distribution to facilitate downstream outlier detection.",
"Xu et al. (2020); Zeng et al. (2021b); Podolskiy et al. (2021) assume the IND semantic feature distribution as Gaussian discriminant analysis (GDA) and identify Out-of-domain samples by Mahalanobis distances.",
"(Fei and Liu, 2016) reduce open space risk by decision boundaries and ABD (Zhang et al., 2021) propose to learn adaptive circular decision boundaries.",
"Very recently, Zeng et al. (2021a) propose a supervised contrastive learning objective to maximize inter-class variance and to minimize intra-class variance.",
"These methods also restrict the feature distribution in the feature learning stage or downstream detection stage and fail to solve Out-of-domain classification completely.",
"Contrastive learning, which can be traced back to (Hadsell et al., 2006), is widely used in unsupervised or self-supervised learning (He et al., 2020; Wang and Isola, 2020; Khosla et al., 2020).",
"With similarity by dot product, Gutmann and Hyvrinen (2010) propose InfoNCE loss to measure the similarities of sample pairs in semantic space.",
"To obtain more number of negative samples for contrastive learning, He et al. (2020) introduce Momentum Contrastive (MoCo) that builds a large and consistent dictionary facilitating contrastive unsupervised learning.",
"With the prevalence of pre-trained models (PTMs) (Qiu et al., 2020; Lin et al., 2021) in different fields, Dwibedi et al. (2021); Li et al. (2021) combine PTMs with contrastive learning paradigm, which adopts neighbors and uses MoCo or Memory Banks to obtain enough negative samples.",
"Open space risk We define open space O and open space risk as follow (Scheirer et al., 2012; Bendale and Boult, 2015).",
"Using IND training samples, open space can be defined as 2 : O = S (cid:91) x X ( x ) , (1) where is a local (and small) semantic space spanned by IND training sample x , X is the set of all IND training samples and S including both open space O and remaining space.",
"Consider a measurable recognizer (or discriminator) f that f ( x ) = 1( > 0) for the IND intents, otherwise f ( x ) = 0( < = 0) , probabilistic open space risk RO can be formed in terms of Lebesgue measure: RO ( f ) = (cid:82) O f ( x, ) dx (cid:82) S f ( x, ) dx .",
"Objective of OOD Intent Classification To identify OOD intents, we need to learn intent representations at first, which also ensures the classification quality of IND in addition to adapting to downstream detection.",
"Therefore we introduce an additional optimization objective as also suggested in (Scheirer et al., 2012; Bendale and Boult, 2015), 2 We also define a specific open space risk for limited OOD samples.",
"where is a hyper-parameter to balance empirical and open space risk and H is the function space.",
"To optimize above objective.",
"We first utilize the BERT (Devlin et al., 2019) to extract intent representation.",
"Given the i-th in-domain utterance, we get its contextual embeddings [[ CLS ] , T 1 , T 2 , ..., TN ] .",
"As suggested in (Zhang et al., 2021), we operate mean-pooling on these contextual token embeddings to obtain sentence semantic representation Z i : Z i = Mean-Pooling ([[ CLS ] , T 1 , ...T N ]) , (4) where Z i RH , N is the sequence length and H is the hidden dimension.",
"It is worth noting that due to the lack of OOD intent samples, we can not directly optimize the open space risk.",
"Previous methods indirectly reduce the open space risk by pulling together IND samples belonging to the same class and pushing apart samples from different classes.",
"However, such approaches may increase the risk of identifying intra OOD intents as IND based on the above analysis.",
"Intuitively, to reduce the risk that identifying intra OOD intents as IND, we do not need to pull together all IND intent samples belonging to the same class and just pull together k-nearest neighbors while pushing apart them from different class intent samples as shown in Figure 2",
"(c).",
"In order to achieve this goal, we get the KNN-contrastive loss L knn cl by rewriting the contrastive loss: L knn-cl = N (cid:88) i =1 1 |X k | (cid:88) z j X k log exp( z i z j ) (cid:80) z q I exp( z q z i ) , (6) where X k denotes the set of k-nearest neighbors of sample z i , I A (cid:83) { z j } , A is the set of samples whose classes are different from that of z j .",
"is the temperature hyper-parameter.",
"We further analyze how KNN-contrastive loss is benefit to inter OOD intents and intra OOD intents simultaneously, see Appendix A.5 and Appendix A.6 for more detailed and deeper analysis.",
"When conducting KNN-contrastive learning, we need to solve two problems:",
"a) large batch size, the more samples we can select, the more likely we to find k-nearest neighbors.",
"Meanwhile, we also need enough negatives to distinguish.",
"b) The k-nearest neighbors should keep consistent as they evolve during training, otherwise, KNN-contrastive train learning may be unstable.",
"Interestingly, these problems mentioned above are also those Momentum Contrast (MoCo) (He et al., 2020) wants to solve.",
"Following MoCo, we also maintain a queue containing IND samples and update it with features of the current batch while dequeuing the oldest features.",
"The queue decouples the size of samples from the batch size, allowing us to obtain more negative samples (benefit to reduce open space risk).",
"To maintain consistency, the features coming from the previous several batches are encoded by a slowly updating network (encoder), whose parameters are momentum-based average of the parameters from the query encoder (another net-work), see (He et al., 2020) for details.",
"Combing softmax cross-entropy loss and KNN-contrastive learning loss, the final finetune obejective to learn discriminative features as: L obj = L knn-cl + (1 ) L ce , (7) where is a hyper-parameter to balance empirical and open space risk.",
"To be closer to the realistic scenario, we prefer the detection algorithm downstream without assuming a potential distribution of the IND intents.",
"Therefore, we adopt a simple and universal detection algorithm LOF algorithm (Breunig et al., 2000) and compute LOF score following (Lin and Xu, 2019), see Appendix A.3 for specific calculation steps.",
"Our model architecture is shown in Figure 3.",
"In order to verify the effectiveness and generality of our method, we conduct experiments on four different and challenging real-world datasets.",
"The detailed statistics are shown in Appendix A.4.",
"CLINC-FULL (Larson et al., 2019) is a dataset specially designed for OOD detection, which consists of 150 classes from 10 domains.",
"This dataset includes 22500 IND utterances and 1200 OOD utterances.",
"CLINC-SMALL (Larson et al., 2019) is the CLINC-FULL variant, in which there are only 50 training utterances per each IND class.",
"This dataset includes 15000 IND utterances and 1200 OOD utterances.",
"BANKING (Casanueva et al., 2020) is a dataset about banking.",
"The dataset, covering 77 classes, consists of 9003, 1000 and 3080 utterances in training, validation and test sets respectively.",
"StackOverflow (Xu et al., 2015) is a dataset published in Kaggle.com.",
"This dataset has 20 classes and consists of 12000, 2000 and 6000 in training, validation and test sets respectively.",
"We extensively compare our method with the following OOD classification methods: MSP (Hendrycks and Gimpel, 2017), DOC (Shu et al., 2017), SEG (Yan et al., 2020), LMCL (Lin and Xu, 2019), Softmax (Zhan et al., 2021), OpenMax (Ben-dale and Boult, 2016), ADB (Zhang et al., 2021), SCL (Zeng et al., 2021a).",
"the backbone network.",
"We report the current best results of various methods on the corresponding datasets.",
"Softmax/LMCL learns discriminative features by softmax/large margin cosine loss and use additional detector such as LOF or GDA for detecting out-of-domain.",
"ADB (Zhang et al., 2021)/SCL (Zeng et al., 2021a) are also related to our method, however, the original paper does not report results in on some datasets.",
"We supplement results by running their released code.",
"For all datasets, we follow previous work (Zhang et al., 2021; Zeng et al., 2021a; Zhan et al., 2021) and group all OOD classes as one rejected class.",
"We calculate accuracy and F1-score in the same way as (Zeng et al., 2021a).",
"To better evaluate the ability of our method to distinguish IND and OOD intents, we calculate macro F1-score over IND classes and OOD classes, represented by F1-IND and F1-OOD respectively.",
"To comprehensively evaluate the performance of our model, we also compare accuracy score ( ACC-ALL ) and macro F1-score ( F1-ALL ) over all classes (IND and OOD classes).",
"Due to no OOD classes in the BANKING and StackOverflow, we follow the setting in (Zhang et al., 2021; Zhan et al., 2021).",
"After datasets are split into train, validation, and test respectively, we randomly sample 25%, 50%, and 75% of the intent classes and discard the remaining classes in the 5133 training and validation sets 3 .",
"The disposed classes are kept in the test set as OOD classes.",
"CLINC-FULL and CLINC-SMALL are constructed for OOD detection especially and the datasets themselves provide OOD classes.",
"We follow (Zeng et al., 2021a,b) and take the OOD class provided by datasets as the objective of detecting without dividing datasets additionally.",
"As a reminder, we do not use OOD classes during training in any cases.",
"To reduce the deviation, we use two basic distances, Euclidean and Cosine (more discussed in Appendix A.2), to calculate the LOF score.",
"For each distance, we take different random seeds to run 3 rounds, and we report the total average results.",
"We employ the BERT model (bert-uncased, with 12-layer transformer) implemented by Hugging-face Transformers 4 and adopt most of its suggested hyperparameters for finetuning.",
"We tried learning rate in {1e-5, 5e-5}, and training batch size is set 16 or 32.",
"Concerning contrastive learning, we tried the size of the queue in {6500, 7500, 8000} and the momentum update parameter m = 0.999.",
"We use the AdamW optimizer (Loshchilov and Hutter, 2019).",
"We select the LOF threshold by calculating the best macro F1-score and accuracy over IND classes on the validation set.",
"ALL experiments were conducted in the Nvidia Ge-Force RTX-2080 Graphical Card with 11G graphical memory.",
"The results in BANKING and StackOverflow are presented in Table 1, where the best results are highlighted in bold.",
"Compared with other baselines, our method consistently improves in all settings.",
"F1-OOD represents the F1-score of OOD class, and F1-IND is the macro F1-score of IND classes.",
"Our method achieves favorable performance simultaneously, which shows that our method improves the capability of detecting OOD intents without sacrificing the accuracy of IND classes classification.",
"The results in CLINC-FULL and CLINC-SMALL datasets are presented in Table 2.",
"Our method is also better than all other kinds of methods.",
"It is worth noting that F1-IND and F1-OOD have improved compared with SCL significantly.",
"We suppose this is due to the use of Momentum Contrast framework in our method, which obtains more negative samples by maintaining a queue (the capacity is much larger than batch size) so that fur-3 See more discusses in Appendix A.7.",
"ther pushes apart samples from different classes, as shown in Section 5.1.",
"To compare our method with the previous methods intuitively, we use t-SNE (Van der Maaten and Hinton, 2008) to visualize deep features of some intent samples sampled from CLINC-FULL learned by BERT, SCL, and our method.",
"Firstly, compared 5134 Methods BANKING StackOverflow ACC-ALL F1-ALL F1-OOD F1-IND ACC-ALL F1-ALL F1-OOD F1-IND 25% MSP 43.67 50.09 41.43 50.55 28.67 37.85 13.03 42.82 DOC 56.99 58.03 61.42 57.85 42.74 47.73 41.25 49.02 OpenMax 49.94 54.14 51.32 54.28 40.28 45.98 36.41 47.89 Softmax 57.88 58.32 62.52 58.10 46.17 50.78 42.52 51.83 LMCL 64.21 61.36 70.44 60.88 47.84 52.05 49.29 52.60 SEG 51.11 55.68 53.22 55.81 47.00 52.83 46.17 54.16 ADB 78.85 71.62 84.56 70.94 86.72 80.83 90.88 78.82 SCL+GDA 83.87 67.94 89.44 66.81 82.29 70.92 88.99 67.44 SCL+LOF 84.05 74.86 89.01 74.12 80.10 78.51 84.45 77.32 Ours 85.62 77.13 90.19 76.44 89.04 81.61 92.7 79.39 50% MSP 59.73 71.18 41.19 71.97 52.42 63.01 23.99 66.91 DOC 64.81 73.12 55.14 73.59 52.53 62.84 25.44 66.58 OpenMax 65.31 74.24 54.33 74.76 60.35 68.18 45.00 70.49 Softmax 67.44 74.19 60.28 74.56 65.96 71.94 56.80 73.45 LMCL 72.73 77.53 69.53 77.74 58.98 68.01 43.01 70.51 SEG 68.44 76.48 60.42 76.90 68.50 74.18 60.89 75.51 ADB 78.86 80.90 78.44 80.96 86.40 85.83 87.34 85.68 SCL+GDA 79.38 79.84 79.97 79.83 82.31 79.54 84.42 79.04 SCL+LOF 80.54 82.4 80.42 82.6 84.47 84.57 85.01 84.53 Ours 83.14 83.87 83.58 83.88 87.62 87.18 88.36 87.06 75% MSP 75.89 83.60 39.23 84.36 72.17 77.95 33.96 80.88 DOC 76.77 83.34 50.60 83.91 68.91 75.06 16.76 78.95 OpenMax 77.45 84.07 50.85 84.64 74.42 79.78 44.87 82.11 Softmax 78.20 84.31 56.90 84.78 77.41 82.28 54.07 84.11 LMCL 78.52 84.31 58.54 84.75 72.33 78.28 37.59 81.00 SEG 78.87 85.66 54.43 86.20 80.83 84.78 62.30 86.28 ADB 81.08 85.96 66.47 86.29 82.78 85.99 73.86 86.80 SCL+GDA 79.86 85.14 64.49 85.5 80.88 84.79 68.83 85.86 SCL+LOF 81.56 86.97 65.05 87.35 80.92 83.98 71.71 84.79 Ours 81.77 87.07 67.66 87.41 83.85 87.06 74.20 87.92 Table 1: Results of OOD classificaion with different IND classes rate (25%, 50% and 75%) on BANKING and StackOverflow.",
"with SCL Figure",
"4(b), our method further push apart samples from different classes, especially for the classes of whisper-mode(green) and can-cel(blue) .",
"This shows our method can ensure the classification quality of IND intents and better optimize empirical risk.",
"Can our method optimize open space risk better?",
"As shown in Figure",
"4(c) and Figure",
"4(d), we notice that the features of vaccines class (brown) can be clustered into three clusters (top-k=5) or two clusters (top-k=15), which means leave more space for OOD intent samples as expected.",
"To further verify the effectiveness of our method to characterize the difference between IND and OOD intents, we draw the LOF score distribution of samples in CLNIC-FULL test set.",
"We show results in Figure 5.",
"Compared with SCL (left) Figure",
"5(b), our LOF scores of OOD intent samples are spread in a larger range, which indicates that there is a larger margin around OOD intent samples according to the definition of LOF and further shows our method optimize the open space risk better.",
"And this larger margin makes it is easier for us to find a baseline (brown dotted line) in Figure",
"5(b) (our method) to separate IND and OOD samples, which indicates they are distinguished better.",
"The above results also ensure that we can detect OOD intents without making assumptions about the feature distribution.",
"In Section 3.3, we propose a novel method, which utilizes the k-nearest neighbors of IND intents to learn discriminative features, to reduce open space risk.",
"To investigate the effect of k, we compare the performance of the model in detecting OOD intents with different k values (in a certain range) during contrastive learning (fixing other hyper-parameters).",
"As shown in Figure 6, we have observed that the performance (cosine-based) of the model first increase and then decrease on four datasets as the value increases.",
"This phenomenon is as expected.",
"At the beginning of k growth, due to the reduction of open space risk, the risk of inter and intra OOD samples being identified as IND decreases (corresponding F1-OOD increases).",
"Later, due to the compression of IND semantic space, more and more intra OOD samples are identified as IND (corresponding F1-OOD decreases) and finally tend to be stable.",
"The phenomenon also shows that our method can better reduce the open space risk than the previous methods (which pull together all IND intents belonging to the same class).",
"While we want to minimize open space risk (help-ful to detect OOD intents), we also need to balance it against the empirical risk (ensure the classification quality of IND classification) over the training data.",
"In the objective (in",
"Eq.(7)) of OOD classification, we use the hyper-parameter to balance empirical and open space risk.",
"To analyze the effect of our introduced , we experiment our method (cosine-based) with different s in different datasets.",
"As shown in Figure 7, we find with the increment of , the model gradually reaches the best empirical-open trade-off, which means the model can ensure the classification quality of IND and OOD detection effectively.",
"The only empirical risk or open space risk can not make the model achieve better results.",
"In this paper, we explicit the optimization objective of OOD intent classification.",
"We analyze the limitation of existing methods and propose a simple yet effective method to learn discriminative semantic features.",
"Our approach pulls together k-nearest neighbors of IND intents and pushes apart them from different class samples to better reduce both empirical risk and open space risk.",
"Extensive experiments conducted on four challenging datasets show our approach achieves consistent improvements without restrictions on feature distribution.",
"We thank Demin Song for helpful discussion about experiment.",
"We would also like to thank Hang Yan and TianXiang Sun for helpful suggestion for this paper.",
"This work was supported by the National Key Research and Development Program of China (No. 2020AAA0108702) and National Natural Science Foundation of China (No. 62022027).",
"The datasets used in all experiments are derived from previously published scientific papers, and to our knowledge, there are no privacy or ethical issues."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"other",
"other",
"other",
"method"
] |
[
"Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks.",
"However, existing models require a large amount of parallel image-caption data for pre-training.",
"Such data are costly to collect and require cumbersome curation.",
"Inspired by unsupervised machine translation, we investigate if a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora.",
"In particular, we propose to conduct mask-and-predict pre-training on text-only and image-only corpora and introduce the object tags detected by an object recognition model as anchor points to bridge two modalities.",
"We find that such a simple approach achieves performance close to a model pre-trained with aligned data, on four English V&L benchmarks.",
"Our work challenges the widely held notion that aligned data is necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.",
"Pre-trained contextual vision-and-language (V&L) models (Lu et al., 2019; Tan and Bansal, 2019; Li et al., 2019; Su et al., 2019; Chen et al., 2020c) have achieved high performance on various V&L tasks.",
"However, different from contextual language models, such as BERT (Devlin et al., 2019a), which are trained on easily-accessible unannotated text corpora, existing V&L models are still a step away from self-supervision.",
"They require a massive amount of aligned text-image pairs for mask-and-predict pre-training.",
"Such aligned data are costly to collect and hard to scale up.",
"For example, the widely used MS-COCO dataset (Chen et al., 2015) requires extensive The two authors contributed equally.",
"In this paper, we explore unsupervised V&L pre-training with unaligned image and text corpora.",
"2 This research direction aligns with the theme of unsupervised and self-supervised learning that moves from heavily-annotated data to unannotated data, e.g. unsupervised machine translation (Lam-ple et al., 2018) and unsupervised image captioning (Feng et al., 2019).",
"Unsupervised V&L pretraining is highly desirable as in many domains, aligned data is scarce (e.g. multimodal hate speech detection (Kiela et al., 2020) and the medical domain (Li et al., 2020c)) and it is easier to collect unaligned text and images.",
"In addition to its practical implication, our endeavour challenges the widely held notion that image-caption corpora is indispensable for pre-training (Lu et al., 2019) and brings valuable insight into the role that aligned data play in V&L pre-training.",
"We are inspired by works on multi-lingual contextual language models (Pires et al., 2019).",
"If we treat an image as a set of regions and each region as a visual token (Dosovitskiy et al., 2020), V&L models share a similar goal with multi-lingual models as they both learn shared representations across different domains.",
"Although a multi-lingual language model pre-trained on non-parallel corpora such as mBERT (Devlin et al., 2019b) cannot align or translate languages out-of-the-box, its representation spaces for different languages can be easily aligned with a linear probe (Conneau et al., 2020).",
"This property suggests the existence of universal latent symmetries in the unaligned contextual embedding spaces and is believed to contribute to 1 Other datasets also require cumbersome curation.",
"For example, while Conceptual Captions is crawled from the web, the authors report that from 5 billion images gathered over the Internet, only 3 million have paired high-quality captions after filtering (Sharma et al., 2018; Changpinyo et al., 2021).",
"2 Following Lample et al. (2018) and Feng et al. (2019), we use the term unsupervised to refer to pre-training with unaligned data, while supervised refers to pre-training with aligned text and images.",
"mBERT's cross-lingual transfer ability.",
"Thus we hypothesize that strong V&L representations can be similarly learned by mask-and-predict pretraining on unaligned language and vision data.",
"We propose unsupervised V&L pre-training with unaligned text and images (see an illustration in Figure 1).",
"Specifically, we take VisualBERT (Li et al., 2019) as a running example and apply unsupervised pre-training, resulting in Unsupervised VisualBERT (U-VisualBERT).",
"The model takes the form of a single Transformer that can accept inputs from both modalities.",
"During each step of pre-training, unlike the existing models that observe a batch of text-image pairs, our model observes either a batch of text segments or a batch of images.",
"When provided with text, part of the text is masked and the model is trained to predict the masked words; when provided with an image, part of the image regions are masked and the model is trained to predict properties of the masked regions.",
"To further encourage cross-modal fusion, we leverage the tags from an object detector as an-chor points (Li et al., 2020b).",
"For every object, we append its detected tag as a word to the visual input.",
"The mask-and-predict objective is applied to the tags.",
"For instance, for the image in Figure 1, the model can observe cake appears naturally as a word, a tag, and an image region.",
"The direct typing of image regions and words can be learned and serves as a starting point for further alignment.",
"The function of the detector tags resembles that of the overlapping vocabulary in multi-lingual language models, i.e., identical strings that appear in different languages with the same meanings (e.g., DNA appears in both English and French).",
"As the overlapping vocabulary improves cross-lingual transfer (Wu and Dredze, 2019), we argue the detector tags can improve cross-modal grounding.",
"We first conduct controlled experiments by pretraining on an English image-caption corpus without providing the alignment, following unsupervised machine translation and image captioning (Gu et al., 2019).",
"Results on four English V&L benchmarks (VQA (Goyal et al., 2017), NLVR 2 (Suhr et al., 2019), Flickr30K Image Retrieval (Plummer et al., 2015), and RefCOCO+ (Yu et al., 2016)) show that U-VisualBERT achieves comparable performance as models with access to text-image pairs (Section 4).",
"Additionally, our approach is effective in practical settings,",
"1) when using independently collected images and captions and",
"2) when using images and general-domain text (BookCorpus (Zhu et al., 2015)) without any captions (Section 5.1).",
"Quantitative and qualitative analysis confirms the anchoring effect of the detector tags (Section 5.2).",
"As a byproduct, we conduct preliminary experiments to show the promise of the approach in a semi-supervised setting, where a hybrid model pre-trained with both aligned and additional unaligned data surpasses a model pre-trained only on aligned data.",
"(Section 6).",
"The above experiments demonstrate the wide applicability of our method.",
"We will open-source the project under https://github.com/uclanlp/visualbert .",
"Pre-trained V&L Transformers Various V&L models that are pre-trained with a mask-and-predict objective on aligned text-image data have been proposed (Lu et al., 2019; Tan and Bansal, 2019; Li et al., 2019; Su et al., 2019; Chen et al., 2020c; Li et al., 2020a; Zhou et al., 2020; Huang et al., 2020; Yu et al., 2020; Gan et al., 2020).",
"Two kinds of designs have been proposed.",
"Two-stream models (Lu et al., 2019; Tan and Bansal, 2019; Yu et al., 2020) utilize separate Transformers (Vaswani et al., 2017) for each modality and a cross-modality module is adopted.",
"Single-stream models (Li et al., 2019; Su et al., 2019; Chen et al., 2020c) directly input the text and visual embeddings into one single Transformer.",
"They have been widely used by downstream tasks (Kiela et al., 2020).",
"Probing tasks (Cao et al., 2020) confirm that they capture useful V&L information after pre-training.",
"Two studies also try to incorporate tag information during pre-training.",
"Oscar (Li et al., 2020b) adds detected tags as additional signals when pretraining with aligned data.",
"We, however, do so for pre-training with unaligned data and show that the tags serve a more important role in unsupervised pre-training (Section 5.2).",
"VIVO (Hu et al., 2020) targets novel object captioning.",
"They use manually annotated image-tag data for pre-training and image-caption data for fine-tuning.",
"We do not use manually annotated data and the tags are noisily generated by a detector.",
"Self-supervised Representation Learning Self-supervision involves creating supervision objectives from natural data, often by corrupting the input and training the model to reconstruct the input (Kolesnikov et al., 2019) or contrastive learning (Chen et al., 2020b).",
"Self-supervised training on language (Peters et al., 2018; Devlin et al., 2019a) such as BERT has been proven useful for various NLP tasks (Liu et al., 2019), while self-supervised visual representation learning has been centered around learning low-level visual features, in hope of enhancing the backbone CNN (Doersch et al., 2015; Pathak et al., 2016; Noroozi and Favaro, 2016; Chen et al., 2020b).",
"In this paper, we conduct V&L pre-training by optimizing a reconstructive objective on unlabeled language-only and image-only data.",
"Thus, our proposed model could be regarded as self-supervised.",
"Notably, our contextual visual representation is built on top of a pre-trained detector, operating at a level above local visual features.",
"Unsupervised Multi-lingual Language Model This work is inspired by multi-lingual representations trained without parallel corpora (Devlin et al., 2019b).",
"They are effective for cross-lingual transfer, which involves learning a model in one language and applying it to another with no additional training.",
"Studies (Wu and Dredze, 2019; Conneau et al., 2020) have confirmed several design choices that facilitate such transfer, e.g. shared parameters and overlapping vocabularies across languages, and we make similar design choices in U-VisualBERT (Section 3.2).",
"We argue that multi-lingual representations bear resemblance to multi-modal representations as both seek to encode the alignment between two domains (Chen et al., 2020a).",
"Unsupervised Grounding Learning Prior works have explored learning grounding with weak or no supervision (Rohrbach et al., 2016; Xiao et al., 2017; Wang et al., 2020).",
"Closest to this paper is unsupervised image captioning (Feng et al., 2019; Laina et al., 2019; Gu et al., 2019), which conducts image captioning with unpaired images and captions.",
"Similar to this work, the detector tags serve as the anchor points for image captioning.",
"However, unsupervised image captioning still requires captions, while our approach works with easy-to-collect general-domain text without any caption text (Section 5.1).",
"We first take Supervised VisualBERT (S-VisualBERT) as an example and illustrate how a typical V&L model is pre-trained with aligned data.",
"Then we introduce unsupervised V&L pre-training, and the resulting model Unsupervised VisualBERT (U-VisualBERT).",
"As mentioned in Section 2, there are several V&L representation learning methods based on BERT.",
"We take Supervised VisualBERT (S-VisualBERT) as an example, which will also be used as a baseline in the experiments.",
"S-VisualBERT is modi-fied from the original VisualBERT (Li et al., 2019) and augmented with the visual objectives from LXMERT (Tan and Bansal, 2019) and detector tags similar to Oscar (Li et al., 2020b) (discussed in detail in Section 3.2).",
"Every input to S-VisualBERT contains a text segment T and an image I .",
"The text and the image are first mapped into embedding vectors respectively.",
"Text embeddings T is a matrix in which each column vector represents the embedding of a subword in the text sequence, i.e. T = [ w 1: n ] .",
"Following BERT, each subword embedding w i is the sum of its token, position, and segment embedding.",
"Image embeddings I include both the image region embeddings r 1: m and the detector tag embeddings d 1: l (see Section 3.2 for details).",
"Each region embedding r i is the sum of a visual feature vector from the detector and a spatial box coordinate embedding (Tan and Bansal, 2019).",
"The text and visual embeddings are then passed through a Transformer to built contextual representations.",
"The model is pre-trained with a mask-and-predict objective.",
"Given a text-image pair [ T, I ] from the aligned dataset D , we randomly mask out some words w i , some regions r j , and some tags d k to obtain masked [ T, I ] .",
"The model is trained to predict the masked words, the properties of the masked regions, and the masked tags given [ T, I ] .",
"The pre-training objective can be summarized as: min (cid:88) [ T,I ] DLT + I + M (cid:16) f ([ T, I ]) , [ T, I ] (cid:17) .",
"f represents the embedding layer and the multilayer Transformer.",
"LT + I + M is the sum of",
"1) the masked language model loss LT , 2) the image reconstruction loss LI , and",
"3) an text-image match objective LM .",
"Specifically, LI includes a tag reconstruction loss L tagI (more details in Section 3.2) and the two visual losses as in LXMERT (Tan and Bansal, 2019): the region feature regression loss L refI , which forces the model to regress to the visual vector, and the noisy label classification loss L clsI , which predicts the detected labels of masked objects with the cross-entropy loss.",
"With a probability of 0.5, we provide the model with a mismatched text-image pair instead of a matched pair, and LM asks the model to predict whether the image matches the text.",
"After the model is pre-trained, it can be fine-tuned for V&L tasks similar to how BERT is fine-tuned for NLP tasks.",
"We introduce the two core design choices of unsupervised pre-training: mask-and-predict pretraining with unaligned data and the detector tags.",
"Data We assume access to a text corpus DT and an image corpus DI for pre-training.",
"During every pre-training step, we randomly sample either a batch of text from DT or a batch of images from DI .",
"No alignment between text and images is provided to the model.",
"When pre-training with a text segment T , the model is trained to reconstruct T given the masked T .",
"3 When pre-training with an image I , the model is trained to reconstruct I given the masked I .",
"A single Transformer is used throughout two modalities (i.e. shared across modalities).",
"The pre-training objective can be summarized as: min (cid:88) T DTLT ( f ( T ) , T ) + (cid:88) I DILI ( f ( I ) , I ) .",
"After pre-training, the model is fine-tuned on downstream tasks just as its supervised counterpart, with the input being a text-image pair.",
"Detector Tags While mask-and-predict pretraining with unaligned data in itself achieves nontrivial performance (Section 5.2), we find it beneficial to provide noisy alignment signals in the form of the detector tags.",
"When modeling an image I , for each region detected, we append the tag outputted by the object detector to the input.",
"The detector (Ren et al., 2015) is pre-trained on a general object detection dataset (Krishna et al., 2017; Anderson et al., 2018) and the tags are essentially a bag of words that provide some noisy grounding signals to the model.",
"During pre-training, we apply the mask-and-predict objective to the tags, which further encourages grounding.",
"We process the detector tags as a subword sequence d 1: l with spatial coordinates.",
"4 Every tag subword is embedded as the sum of its token embedding and a spatial coordinate embedding.",
"The token embedding is the same as the token embedding used in text modeling, while the spatial coordinate embedding is the same as the coordinate embedding of the corresponding region.",
"The coordinate embedding allows the model to distinguish tags from different regions.",
"5 With the de-3 We adopt the next sentence prediction task in BERT when long documents are available.",
"4 Each tag corresponds to a region.",
"A tag could be split into multiple subwords, so the total length of the tag subword sequence l is equal to or larger than the number of regions m .",
"5 This design differs from that of Oscar (Li et al., 2020b).",
"Oscar does not add the coordinate embeddings to tags to encourage the fusion of tag and visual representations.",
"tector tags added, the image I is embedded as a sequence of image region features r 1: m followed by a sequence of detector tag embeddings d 1: l , i.e. I = [ r 1: m ; d 1: l ] .",
"The tags are added during both pre-training and fine-tuning.",
"Further, during pretraining, certain tag subwords are masked and the tag reconstruction loss L tagI supervises the model to predict the masked tags.",
"The tags are predicted just as masked subwords are predicted in text modeling.",
"The prediction softmax layer is shared between the tag and text subwords.",
"The parameters involved in modeling tags include the token embedding, the coordinate embedding, and the subword softmax embedding.",
"These embedding parameters are shared across modalities and encourage the model to project text, visual, and tag representations into the same space (see Section 5.2 for an example).",
"This resembles the design in multi-lingual language models, which use shared BPE embeddings and softmax weights across languages (Wu and Dredze, 2019).",
"As the domain and quality of data may affect the model performance, the conventional practice in unsupervised learning is to use aligned corpora without providing alignments, allowing for controlled comparison with a supervised model.",
"For example, unsupervised machine translation creates unaligned corpora by splitting up parallel corpora (Lample et al., 2018) while unsupervised image captioning (Gu et al., 2019) create unaligned corpus by shuffling images and captions from MSCOCO (Chen et al., 2015).",
"Following prior work, we first conduct experiments by using Conceptual Captions (CC) (Sharma et al., 2018) as the source of images and text for both the supervised and unsupervised model.",
"Later in Section 5.1, we show that our method is effective when the images and captions are collected independently and when no caption text is used.",
"U-VisualBERT The model is pre-trained with shuffled captions and images.",
"At each training step, we sample either a batch of images or a batch of text.",
"Following VL-BERT (Su et al., 2019), we find it beneficial to include BookCorpus (Zhu et al., 2015), a general-domain text corpus, during pretraining.",
"In sum, U-VisualBERT is trained on 3M images from CC, 3M captions from CC, and 2.5M text segments from BookCorpus 6 .",
"S-VisualBERT We introduce a Supervised VisualBERT (S-VisualBERT) trained with aligned data as introduced in Section 3.1.",
"S-VisualBERT is pretrained on 3M caption-image pairs from CC and 2.5M text segments from BookCorpus.",
"Compared Models Additionally, we list the performance of a Base VisualBERT that is initialized from BERT and does not undergo further pre-training.",
"Previously reported supervised models that are trained on CC are also listed, including ViLBERT , VL-BERT , and UNITER .",
"For UNITER, we include the version that is trained only on CC (UNITER cc ) 7 .",
"Although their network architectures differ from ours and cannot be directly compared, they jointly paint the picture of the performance we should expect by pre-training on CC.",
"Models developed before BERT are listed as Pre-BERT (Gao et al. (2019) for VQA, Suhr et al. (2019) for NLVR 2 , Lee et al. (2018) for Flickr30K, and Yu et al. (2018) for RefCOCO+).",
"Setup For all the VisualBERT variants introduced in the paper, we initialize them from BERT base and pre-train for 10 epochs on their respective pre-training datasets with a batch size of 144.",
"All models can be trained within 3 days on 4 V100s each with 16GB of memory.",
"We use the Adam optimizer (Kingma and Ba, 2015) with a linear-decayed learning-rate schedule (Devlin et al., 2019a) and a peak learning rate at 6 10 5 .",
"We conduct evaluations by fine-tuning on four downstream tasks: Visual Question Answering (VQA 2.0) (Goyal et al., 2017), Natural Language for Visual Reasoning (NLVR 2 ) (Suhr et al., 2019), Image Retrieval (Flickr 30K) (Plummer et al., 2015), and Referring Expression (RefCOCO+) (Yu et al., 2016).",
"We use a Faster R-CNN pre-trained on the Visual Genome dataset to extract region features (Anderson et al., 2018).",
"For each task, we follow the recommended setting in previous works.",
"For details, please refer to the appendix.",
"Results Table 1 summarizes the results.",
"For each model, we list the type and amount of data used 6 Our version of BookCorpus contains around 5M text segments with 64 words per segment.",
"For computational reasons, we downsample the dataset such that during each epoch, the model observes only half of the text segments from BookCorpus.",
"This downsampling is also done for the other VisualBERT variants.",
"during pre-training.",
"8 To control for randomness, we report the means and standard deviations of U-VisualBERT and S-VisualBERT across three runs.",
"U-VisualBERT outperforms the Base model on all benchmarks, while only lagging behind S-VisualBERT slightly on VQA, NLVR 2 , and Ref-COCO+.",
"U-VisualBERT even surpasses or rivals with some supervised models (e.g., ViLBERT on VQA and RefCOCO+, VL-BERT on RefCOCO+, and UNITER cc on RefCOCO+).",
"This shows that a model through unsupervised pre-training can perform comparably with supervised models.",
"On Flickr30K Image Retrieval, the difference between U-VisualBERT and S-VisualBERT is more evident.",
"The task focuses on identifying if an image and a text segment are coherent.",
"S-VisualBERT is provided with explicit signals for such a task with the text-image match objective LM during pre-training (Section 3.1).",
"While U-VisualBERT is not provided with such explicit signals, it still performs better than the Base model.",
"Further, if we were to remove the explicit signal (i.e. the text-image match objective) when pre-training on aligned data, S-VisualBERT without LM achieves only 57.98 on R@1, much closer to U-VisualBERT 8 For models initialized from BERT, we do not count the BERT pre-training data.",
"VL-BERT uses both BookCorpus and Wikipedia during V&L pre-training.",
"We estimate that the two corpora roughly have 5OM segments with 64 words per segment.",
"With a different pre-processing style (e.g. longer segments), the number of segments may change.",
"In this section, we analyze the effect of the text data and the role of the detector tags.",
"The assumption behind unsupervised pre-training is that the detector tags should appear both in the images and text corpus, serving as the grounding anchor points.",
"When the images and captions come from the same corpus, such an assumption clearly holds, and unsupervised pre-training works well (Section 4).",
"However, we are curious if such an assumption still holds",
"1) if images and captions come from independently collected corpora (U-VisualBERT SBU ) and",
"2) if no caption text but general-domain text is provided (U-VisualBERT NC ).",
"The latter setting bears great practical value.",
"Conceptually, collecting caption-style text could be as hard as collecting image-caption data as images and captions seldom appear separately.",
"It is desirable to explore training V&L representations without caption-style text.",
"Thus we experiment pre-training with general-domain text, which could be easier to collect.",
"U-VisualBERT SBU We use 3M images from CC and 1M captions from SBU captions (Ordonez et al., 2011).",
"To compensate for the different amounts of text between CC and SBU, we upsam-Model VQA NLVR 2 Flickr30K RefCOCO+ Test-Dev Dev Test-P R@1 R@5 R@10 Dev TestA TestB Base NT 69.06 51.98 52.73 48.40 78.20 87.18 70.15 76.91 61.72 U-VisualBERT NT 69.87 67.90 68.92 50.56 80.22 88.32 71.94 77.79 62.38 U-VisualBERT 70.74 71.74 71.02 55.37 82.93 89.84 72.42 79.11 64.19 S-VisualBERT NT 70.49 72.56 73.53 60.26 85.58 91.64 72.70 77.93 62.99 S-VisualBERT 70.87 73.44 73.93 61.19 86.32 91.90 73.65 79.48 64.49 H-VisualBERT 71.05 .02 73.80 .26 74.82 .25 60.28 .60 86.30 .35 92.06 .28 74.01 .25 80.18 .23 64.89 .24 Table 3: Detector tags show a larger impact in the unsupervised setting (U-VisualBERT NT vs. U-VisualBERT) than in the supervised setting (S-VisualBERT NT vs. S-VisualBERT).",
"ple the BookCorpus so that the amount of text data used by U-VisualBERT SBU is roughly the same as U-VisualBERT.",
"U-VisualBERT NC The model is trained on images from CC and text from BookCorpus, a general-domain corpus.",
"Results Unsupervised pre-training is effective in both scenarios (Table 1).",
"When pre-training images and text are collected independently, U-VisualBERT SBU achieves similar performance as U-VisualBERT, with the latter higher on VQA, and the former higher on the other three tasks.",
"When no caption text is used, the performance on NLVR 2 and RefCOCO+ remains unaffected while the performance on VQA and Flickr30K drops slightly, potentially because the language style of VQA and Flickr30K is similar to captions, benefiting U-VisualBERT.",
"Such results are not surprising.",
"In general-domain corpora like Wikipedia, grounded words take up a decent portion ( > 25 % ) (Tan and Bansal, 2020).",
"Thus the tags appear in pretraining text corpora with a non-trivial frequency and U-VisualBERT NC learns from such signals.",
"The above results suggest the applicability of unsupervised pre-training to many language-only and image-only datasets, which are easier to collect than image-caption datasets (Trinh and Le, 2018; Sun et al., 2017).",
"W-VisualBERT NT U-VisualBERT NT observes no tags and only dense region features for image embeddings during pre-training and fine-tuning.",
"For comparison, a base model without tags is introduced (Base NT ), which is initialized from BERT and does undergo further pre-training.",
"S-VisualBERT NT To study the effect of the detector tags when aligned data are present, we introduce S-VisualBERT NT which is trained on aligned data but observes no tags for image embeddings.",
"Result We first find that even without tags, unsupervised pre-training benefits downstream tasks (Table 3).",
"U-VisualBERT NT outperforms Base NT on all metrics with a large margin.",
"We attribute this to the (unaligned) contextual V&L representation learned through pre-training.",
"This bears resemblance to the observation in multi-lingual language models that the shared vocabulary across languages (i.e. anchor points) is not necessary for cross-lingual transfer (Conneau et al., 2020).",
"Further, while the detector tags are beneficial for both supervised and unsupervised pre-training, the performance improvement is more evident for the latter.",
"For example, performance difference on VQA between U-VisualBERT and U-VisualBERT NT is 0.95 (70.82 vs. 69.87) while the difference between S-VisualBERT and S-VisualBERT NT is 0.41 (70.90 vs. 70.49).",
"The results are expected.",
"When aligned data are present, object tags serve as additional signals while in unsupervised pre-training, they serve as the only source from which grounding is learned.",
"Visualization To gain a direct sense of how the detector tags help bridge the modalities, we visualize the contextual representation spaces of S-VisualBERT, U-VisualBERT, and U-VisualBERT NT in Figure 2.",
"For each of the most frequent 15 object classes in the COCO dataset (Chen et al., 2015), we randomly sample at most 50 instances and take the last-layer contextual representations of the words, the objects, and the tags (when available) and visualize them with t-SNE bottle bowl cup car truck motorcycle S-VisualBERT U-VisualBERT motorcycle truck car bottle bowl cup U-VisualBERT NT bottle bottle cup bowl truck car motorcycle motorcycle truck car cup bowl Figure 2: Visualization of the contextual representations of S-VisualBERT, U-VisualBERT, and U-VisualBERT NT .",
"representations of six selected classes.",
"Though trained without aligned data, U-VisualBERT can group text, tag, and visual representations by their semantic classes.",
"Similar phenomena can be observed in S-VisualBERT.",
"U-VisualBERT NT , lacking any signal to align the two spaces, does not show signs of such behaviour.",
"In U-VisualBERT NT , text and visual representations are almost completely separated (e.g., the two disjoint red rectangles in the figure on the right).",
"However, some common structures emerge in both modalities.",
"For instance, representations for car, truck, and motorcycle, the three semantically-related classes, are close to each other, in both the textual and visual modality (the red rectangles); representations for cup, bottle, and bowl are close (the blue rectangles).",
"This also holds for the other two models and resembles what is observed in Li et al. (2020b) and Ilharco et al. (2020).",
"Unsupervised pre-training in itself has great practical and research value in many domains where aligned data is scarce.",
"As a byproduct, we won-der if the approach could find its use in a semi-supervised setting, where we pre-train a model with both aligned data and unaligned data.",
"H-VisualBERT We introduce a hybrid model that is trained on the 3M aligned data from Conceptual Captions (CC) and additional unaligned 1.7M images from Open Images (OI) (Kuznetsova et al., 2020).",
"When a training sample comes from CC, we provide the model with a text-image pair, and when the training sample comes from OI, we provide only the image.",
"We do not use any manually annotated visual labels provided in OI.",
"Result We control for randomness by running H-VisualBERT for three times and report the means and stand deviations.",
"We observe that H-VisualBERT brings consistent improvement upon S-VisualBERT on most tasks (Table",
"3) except Flickr30K 9 .",
"This preliminary result is promising as the dataset scale in this experiment is relatively small (million-scale).",
"Meanwhile, unannotated data generally could not improve upon a model trained with annotated data significantly, unless drastically scaled up (He et al., 2020).",
"We leave large-scale experiments to future work.",
"In this paper, we explore unsupervised pre-training with unaligned data.",
"We conduct mask-and-predict pre-training on textual data and visual data and the detector tags are used as anchor points to bridge the two modalities.",
"Experiments show that unsupervised pre-training can achieve performance similar to supervised pre-training.",
"9 On Flickr30K, the performance between H-VisualBERT and S-VisualBERT is similar, potentially because the imagetext match objective is the dominant contributor and additional image-only data during pre-training have limited benefit (Section 4).",
"One caveat of the proposed method is that data collected from the web may contain biases (Zhao et al., 2017), toxic contents (Schmidt and Wie-gand, 2017), and other ethical issues.",
"This problem is common to ML models and we stress that de-biasing (Zhao et al., 2019) and a rigorous examination are needed before deploying the system.",
"We would like to thank Hao Tan, members of UCLA NLP, and members of UCLA PlusLab for their helpful comments.",
"We also thank the reviewers for the valuable reviews.",
"This work was supported in part by DARPA MCS program under Cooperative Agreement N66001-19-2-4032.",
"The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"objective",
"method",
"method",
"other",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Leveraging user-provided translation to constrain NMT has practical significance.",
"Existing methods can be classified into two main categories, namely the use of placeholder tags for lexicon words and the use of hard constraints during decoding.",
"Both methods can hurt translation fidelity for various reasons.",
"We investigate a data augmentation method, making code-switched training data by replacing source phrases with their target translations.",
"Our method does not change the NMT model or decoding algorithm, allowing the model to learn lexicon translations by copying source-side target words.",
"Extensive experiments show that our method achieves consistent improvements over existing approaches, improving translation of constrained words without hurting unconstrained words.",
"One important research question in domain-specific machine translation (Luong and Manning, 2015) is how to impose translation constraints (Crego et al., 2016; Hokamp and Liu, 2017; Post and Vilar, 2018).",
"As shown in Figure 1",
"(a), the word breadboard can be translated into (a wooden board that is used to cut bread on) in the food domain, but (a construction base for prototyping of electronics) in the electronic domain.",
"To enhance translation quality, a lexicon can be leveraged for domain-specific or user-provided words (Arthur et al., 2016; Hasler et al., 2018).",
"We investigate the method of leveraging pre-specified translation for NMT using such a lexicon.",
"For leveraging pre-specified translation, one existing approach uses placeholder tags to substitute named entities (Crego et al., 2016; Li et al., 2016; Wang et al., 2017b) or rare words (Luong et al., Input: I want a breadboard Output: (cid:1) (cid:5)(cid:4) (cid:2)(cid:3) (cid:10)(cid:7)(cid:9)(cid:11) Constrained: (cid:1) (cid:5)(cid:4) (cid:2)(cid:3) (cid:6)(cid:8)(cid:11) I (cid:6)(cid:8)(cid:11) breadboard user-provided or domain-specific dictionary: Input: I want a breadboard Code-switched: I want a (cid:6)(cid:8)(cid:11) Output: (cid:1) (cid:5)(cid:4) (cid:2)(cid:3) (cid:6)(cid:8)(cid:11)",
"2014) on both the source and target sides during training, so that a model can translate such words by learning to translate placeholder tags.",
"For example, the i -th named entity in the source sentence is replaced with tag i , as well as its corresponding translation in the target side.",
"Placeholder tags in the output are replaced with pre-specified translation as a post-processing step.",
"One disadvantage of this approach, however, is that the meaning of the original words in the pre-specified translation is not fully retained, which can be harmful to both adequacy and fluency of the output.",
"Another approach (Hokamp and Liu, 2017; Post and Vilar, 2018) imposes pre-specified translation via lexical constraints , making sure such constraints are satisfied by modifying NMT decoding.",
"This method ensures that pre-specified translations appear in the output.",
"A problem of this method is that it does not explicitly explore the correlation between pre-specified translations and their corresponding source words during decoding, and thus can hurt translation fidelity (Hasler et al., 2018).",
"There is not a mechanism that allows the model to learn constraint translations during training, which the placeholder method allows.",
"We investigate a novel method based on data augmentation , which combines the advantages of both methods above.",
"The idea is to construct synthetic parallel sentences from the original parallel training data.",
"The synthetic sentence pairs resemble code-switched source sentences and their translations, where certain source words are replaced with their corresponding target translations.",
"The motivation is to make the model learn to translate embedded pre-specified translations by copying them from the modified source.",
"During decoding, the source is similarly modified as a preprocessing step.",
"As shown in Figure 1",
"(b), translation is executed over the code-switched source, without further constraints or post-processing.",
"In contrast to the placeholder method, our method keeps lexical semantic information (i.e. target words v.s. placeholder tags) in the source, which can lead to more adequate translations.",
"Compared with the lexical constraint method, pre-specified translation is learned because such information is available both in training and decoding.",
"As a data augmentation method, it can be used on any NMT architecture.",
"In addition, our method enables the model to translate code-switched source sentences, and preserve its strength in translating un-replaced sentences.",
"To further strengthen copying, we propose two model-level adjustments: First, we share target-side embeddings with source-side target words, so that target vocabulary words have a unique embedding in the NMT system.",
"Second, we integrate pointer network (Vinyals et al., 2015; Gulcehre et al., 2016; Gu et al., 2016; See et al., 2017) into the decoder.",
"The copy mechanism was firstly proposed to copy source words.",
"In our method, it is further used to copy source-side target words.",
"Results on large scale English-to-Russian (En-Ru) and Chinese-to-English (Ch-En) tasks show that our method outperforms both placeholder and lexical constraint methods over a state-of-the-art Transformer (Vaswani et al., 2017) model on various test sets across different domains.",
"We also show that shared embedding and pointer network can lead to more successful applications of the copying mechanism.",
"We release four high-quality En-Ru e-commerce test sets translated by Russian language experts, totalling 7169 sentences with an average length of 21 1 .",
"Using placeholders.",
"Luong et al. (2014) use annotated unk tags to present the unk symbols in 1 To best of our knowledge, this is the first public e-commerce test set.",
"training corpora, where the correspondence between source and target unk symbols are obtained from word alignment (Brown et al., 1993).",
"Output unk tags are replaced through a post-processing stage by looking up a pre-specified dictionary or copying the corresponding source word.",
"Crego et al. (2016) extended unk tags symbol to specific symbols that can present name entities.",
"Wang et al. (2017b) and Li et al. (2016) use a similar method.",
"This method is limited when constrain NMT with pre-specified translations consisting of more general words, due to the loss of word meaning when representing them with placeholder tags.",
"In contrast to their work, word meaning is fully kept in modified source in our work.",
"Lexical constraints.",
"Hokamp and Liu (2017) propose an altered beam search algorithm, namely grid beam search, which takes target-side pre-specified translations as lexical constraints during beam search.",
"A potential problem of this method is that translation fidelity is not specifically considered, since there is no indication of a matching source of each pre-specific translation.",
"In addition, decoding speed is significantly reduced (Post and Vilar, 2018).",
"Hasler et al. (2018) use alignment to gain target-side constraints' corresponding source words, simultaneously use finite-state machines and multi-stack (Anderson et al., 2016) decoding to guide beam search.",
"Post and Vilar (2018) give a fast version of Hokamp and Liu (2017), which limits the decoding complexity linearly by altering the beam search algorithm through dynamic beam allocation.",
"In contrast to their methods, our method does not make changes to the decoder, and therefore decoding speed remains unchanged.",
"Translation fidelity of pre-specified source words is achieved through a combination of training and decoding procedure, where replaced source-side words still contain their target-side meaning.",
"As a soft method of inserting pre-specified translation, our method does not guarantee that all lexical constraints are satisfied during decoding, but has better overall translation quality compared to their method.",
"Using probabilistic lexicons.",
"Aiming at making use of one-to-many phrasal translations, the following work is remotely related to our work.",
"Tang et al. (2016) use a phrase memory to provide extra information for their NMT encoder, dynamically switching between word generation and phrase generation during decoding.",
"Wang et al. (2017a) use SMT to recommend prediction for NMT, which contains not only translation operations of a SMT phrase table, but also alignment information and coverage information.",
"Arthur et al. (2016) incorporate discrete lexicons by converting lexicon probabilities into predictive probabilities and linearly interpolating them with NMT probability distributions.",
"Our method is similar in the sense that external translations of source phrases are leveraged.",
"However, their tasks are different.",
"In particular, these methods regard one-to-many translation lexicons as a suggestion .",
"In contrast, our task aims to constrain NMT translation through one-to-one pre-specified translations.",
"Lexical translations can be used to generate code-switched source sentences during training, but we do not modify NMT models by integrating translation lexicons.",
"In addition, our data augmentation method is more flexible, because it is model-free.",
"Alkhouli et al. (2018) simulate a dictionary-guided translation task to evaluate NMT's alignment extraction.",
"A one-to-one word translation dictionary is used to guide NMT decoding.",
"In their method, a dictionary entry is limited to only one word on both the source and target sides.",
"In addition, a pre-specified translation can come into effect only if the corresponding source-side word is successfully aligned during decoding.",
"On translating named entities, Currey et al. (2017) augment the training data by copying target-side sentences to the source-side, resulting in augmented training corpora where the source and the target sides contain identical sentences.",
"The augmented data is shown to improve translation performance, especially for proper nouns and other words that are identical in the source and target languages.",
"Our method is based on data augmentation .",
"During training, augmented data are generated by replacing source words or phrases directly with their corresponding target translations.",
"The motivation is to sample as many code-switched translation pairs as possible.",
"During decoding, given pre-specified translations, the source sentence is modified by replacing phrases with their pre-specified translations, so that the trained model can directly copy embedded target translations in the output.",
"Given a bilingual training corpus, we sample augmented sentence pairs by leveraging a SMT phrase table, which can be trained over the same bilingual corpus or a different large corpus.",
"We extract source-target phrase pairs 2 from the phrase table, replacing source-side phrases of source sentences using the following sampling steps: 1. Indexing between source-target phrase pairs and training sentences:",
"(a) For each source-target phrase pair, we record all the matching bilingual sentences that contain both the source and target.",
"Word alignment can be used to ensure the phrase pairs that are mutual translation.",
"(b) We also sample bilingual sentences that match two source-target phrase pairs.",
"In particular, given a combination of two phrase pairs, we index bilingual sentences that match both simultaneously.",
"2. Sampling:",
"(a) For each source-target phrase pair, we keep at most k 1 randomly selected matching sentences.",
"The source-side phrase is replaced with its target-side translation.",
"(b) For each combination of two source-target phrase pairs, we randomly sample at most k 2 matching sentences.",
"Both source-side matching phrases are replaced with their target translations.",
"3 The sampled training data is added to the original training data to form a final set of training sentences.",
"We impose target-side pre-specified translations to the source by replacing source phrases with their translations.",
"Lexicons are defined in the form of one-to-one source-target phrase pairs.",
"Different from training, the number of replaced phrases in a source sentence is not necessarily restricted to one or two, which will be discussed in Section 5.5.",
"In practice, pre-specified translations can be provided by customers or through user feedback, which contains one identified translation for specified source segment.",
"Transformer (Vaswani et al., 2017) uses self-attention network for both encoding and decod-2",
"decod-2 Source-side phrase is at most trigram.",
"3 We set k 1 = 100 , k 2 = 30 empirically.",
"ing. The encoder is composed of n stacked neural layers.",
"For time step i in layer j , the hidden state h i,j is calculated by employing self-attention over the hidden states in layer j 1 , which are { h 1 ,j 1 , h 2 ,j 1 , ..., h m,j 1 } , where m is the number of source-side words.",
"In particular, h i,j is calculated as follows: First, a self-attention sub-layer is employed to encode the context.",
"Then attention weights are computed as scaled dot product between the current query h i,j 1 and all keys { h 1 ,j 1 , h 2 ,j 1 , ..., h m,j 1 } , normalized with a softmax function.",
"After that, the context vector is represented as weighted sum of the values projected from hidden states in the previous layer, which are { h 1 ,j 1 , h 2 ,j 1 , ..., h m,j 1 } .",
"The hidden state in the previous layer and the context vector are then connected by residual connection, followed by a layer normalization function (Ba et al., 2016), to produce a candidate hidden state h i,j .",
"Finally, another sub-layer including a feed-forward network (FFN) layer, followed by another residual connection and layer normalization, are used to obtain the hidden state h i,j .",
"In consideration of translation quality, multihead attention is used instead of single-head attention as mentioned above, positional encoding is also used to compensate the missing of position information in this model.",
"The decoder is also composed of n stacked layers.",
"For time step t in layer j , a self-attention sub-layer of hidden state s t,j is calculated by employing self-attention mechanism over hidden states in previous target layer, which are { s 1 ,j 1 , s 2 ,j 1 , ..., s t 1 ,j 1 } , resulting in candidate hidden state s t,j .",
"Then, a second target-to-source sub-layer of hidden state s t,j is inserted above the target self-attention sub-layer.",
"In particular, the queries( Q ) are projected from s t,j , and the keys( K ) and values( V ) are projected from the source hidden states in the last layer of encoder, which are { h 1 ,n , h 2 ,n , ..., h m,n } .",
"The output state is another candidate hidden state s t,j .",
"Finally, a last feed-forward sub-layer of hidden state s t,j is calculated by employing self-attention over s t,j .",
"A softmax layer based on decoder's last layer s t,n is used to gain a probability distribution P predict over target-side vocabulary.",
"p ( y t | y 1 , ..., y t 1 , x ) = softmax( s t,n W ) , (1) where W is the weight matrix which is learned, x i want h 1,1 h 2,1 h 3,1 Encoder Layer n [ ][ ] [ ] s 4, n [ ] [ ] [ ] [ ] Source Embeddings Target Embeddings i want (cid:1)(cid:3)(cid:6) : target-to-source attention weights Linear & Softmax : target-side vocabulary probability distribution (1 g pred )* P copy P copy P predict g pred * P predict probability distribution over source-side words and target-side vocabulary Decoder Layer n [ ] h 4,1 [ ] a [ ] a (cid:1)(cid:3)(cid:6) (cid:1)(cid:3)(cid:6) (cid:5)(cid:2)(cid:4)(cid:6) (cid:1)(cid:3)(cid:6) (cid:5)(cid:2)(cid:4)(cid:6) i want a Figure 2: Shared embeddings and pointer network represent the source sentence, { y 1 , y 2 , ..., y t } represent target words.",
"Shared target embeddings enforces the correspondence between source-side and target-side expressions on the embedding level.",
"As shown in Figure 2, during encoding, source-side target word embeddings are identical to their embeddings in the target-side vocabulary embedding matrix.",
"This makes it easier for the model to copy source-side target words to the output.",
"To strengthen copying through locating source-side target words, we integrate pointer network (Gulcehre et al., 2016) into the decoder, as shown in Figure 2. At each decoding time step t , the target-to-source attention weights t, 1 , ..., t,m are utilized as a probability distribution P copy , which models the probability of copying a word from the i -th source-side position.",
"The i -th source-side position may represent a source-side word or a source-side target word.",
"P copy is added to P predict , the probability distribution over target-side vocabulary, to gain a new distribution over both the source and the target side vocabulary 4 : P = (1 g pred ) P copy + g pred P predict , (2) where g pred is used to control the contribution of two probability distributions.",
"For time step t , g pred is calculated from the context vector c t and the current hidden state of the decoder's last layer s t,n : 4 For the words which belong to the source-side vocabulary but are not appeared in the source-side sentence, the probabilities are set to 0.",
"g pred = ( c t W p + s t,n W q + b r ) , (3) where W p , W q , and b r are parameters trained and is the sigmoid function.",
"In addition, the context vector c t is calculated as c t = mi =1 t,i h i,n , where t,i is attention weight mentioned earlier.",
"{ h 1 ,n , h 2 ,n , ..., h m,n } are the source-side hidden states of the encoder's last layer.",
"We compare our method with strong baselines on large-scale En-Ru and Ch-En tasks on various test sets across different domains, using a strongly optimized Transformer (Vaswani et al., 2017).",
"BLEU (Papineni et al., 2002) is used for evaluation.",
"Our training corpora are taken from the WMT2018 news translation task.",
"En-Ru.",
"We use 13.88M sentences as baseline training data, containing both a real bilingual corpus and a synthetic back-translation corpus (Sennrich et al., 2015a).",
"The synthetic corpus is translated from NewsCommonCrawl, which can be obtained from the WMT task.",
"The news domain contains four different test sets published by WMT2018 over the recent years, namely news2015, news2016, news2017, and news2018, respectively, each having one reference.",
"The e-commerce domain contains four files totalling 7169 sentences, namely subject17, desc17, subject18, and desc18, respectively, each having one reference.",
"The sentences are extracted from e-commerce websites, in which subjects are the goods names shown on a listing page.",
"descs refer to information in a commodity's description page.",
"subject17 and desc17 are released 5 .",
"Our development set is news2015.",
"Ch-En.",
"We use 7.42M sentences as our baseline training data, containing both real bilingual corpus and synthetic back-translation corpus (Sen-nrich et al., 2015a).",
"We use seven public development and test data sets, four in the news domain, namely NIST02, NIST03, NIST04, NIST05, respectively, each with four references, and three in the spoken language domain, namely 5 https://github.com/batman2013/ e-commerce_test_sets CSTAR03, IWSLT2004, IWLST2005, respectively, each with 16 references.",
"We use six self-attention layers for both the encoder and the decoder.",
"The embedding size and the hidden size are set to 512.",
"Eight heads are used for self-attention.",
"A feed-forward layer with 2048 cells and Swish (Ramachandran et al., 2018) is used as the activation function.",
"Adam (Kingma and Ba, 2014) is used for training; warmup step is 16000; the learning rate is 0.0003.",
"We use label smoothing (Junczys-Dowmunt et al., 2016) with a confidence score of 0.9, and all the drop-out (Gal and Ghahramani, 2016) probabilities are set to 0.1.",
"We extract a SMT phrase table on the bilingual training corpus by using moses (Koehn et al., 2007) with default setting, which is used for matching sentence pairs to generate augmented training data.",
"We apply count-based pruning (Zens et al., 2012) to the phrase table, the threshold is set to 10.",
"During decoding , similar to Hasler et al. (2018), Alkhouli et al. (2018) and Post and Vilar (2018), we make use of references to obtain gold constraints.",
"Following previous work, pre-specified translations for each source sentence are sampled from references and used by all systems for fair comparison.",
"In all the baseline systems, the vocabulary size is set to 50K on both sides.",
"For Data augmenta-tion, to allow the source-side dictionary to cover target-side words, the targetand source-side vocabularies are merged for a new source vocabulary.",
"For Shared embeddings, the source vocabulary remains the same as the baselines, where the source-side target words use embeddings from target-side vocabulary.",
"We use an in-house reimplementation of Transformer, similar to Google's Tensor2Tensor.",
"For the baselines, we reimplement Crego et al. (2016), as well as Post and Vilar (2018).",
"BPE (Sennrich et al., 2015b) is used for all experiments, the operation is set to 50K.",
"Our test sets cover news and e-commerce domains on En-Ru, and news and spoken language domains on Ch-En.",
"Baseline 1: Using Placeholder.",
"We combine Luong et al. (2014) and Crego et al. (2016).",
"generating placeholder tags during training, following Crego et al. (2016), we use a named entity translation dictionary which is extracted from Wikidata 6 .",
"The dictionary is released together with e-commerce test sets, which is mentioned before.",
"For Ch-En, the dictionary contains 285K person names, 746K location names and 1.6K organization names.",
"For En-Ru, the dictionary contains 471K person names, 254K location names and 1.5K organization names.",
"Additionally, we manually corrected a dictionary which contains 142K brand names and product names translation for En-Ru.",
"By further leveraging word alignment in the same way as Luong et al. (2014), the placeholder tags are annotated with indices.",
"We use FastAlign (Dyer et al., 2013) to generate word alignment.",
"The amount of sentences containing placeholder tags is controlled to a ratio of 5% of the corpus.",
"During decoding, pre-specified translations described in Section 5.2 are used.",
"Baseline 2: Lexical Constraints.",
"We reimplement Post and Vilar (2018), integrating their algorithm into our Transformer.",
"Target-side words or phrases of pre-specified translations mentioned in Section 5.2 are used as lexical constraints.",
"6 https://www.wikidata.org Our System.",
"During training, we use the method described in Section 3.1 to obtain the augmented training data.",
"The SMT phrase table mentioned in Section 5.2 is used for Indexing and Sampling.",
"During decoding, pre-specified translations mentioned in Section 5.2 are used.",
"The augmented data contain sampled sentences with one or two replacements on the source side.",
"By applying the two sampling steps described in Section 3.1, about 10M and 6M augmented Ch-En and En-Ru sentences are generated, respectively.",
"The final training corpora consists of both the augmented training data and the original training data.",
"Comparison with Baselines.",
"Our Transformer implementation can give comparable performance with state-of-the-art NMT (Junczys-Dowmunt et al., 2018), see Transformer and Marian in Table 1, which also shows a comparison of different methods on En-Ru.",
"The lexical constraint method gives improvements on both the news and the e-commerce domains, compared with the Transformer baseline.",
"The placeholder method also gives an improvement on the e-commerce Figure 3: Sample outputs.",
"domain.",
"The average improvement is calculated over all the test set results in each domain.",
"In the news domain, the average improvement of our method is 3.48 BLEU higher compared with placeholder , and 2.94 over lexical constraints .",
"In the e-commerce domain, the average improvement of our method is 1.34 BLEU compared with placeholder , and 2.63 with lexical constraints .",
"Both shared embedding and pointer network are effective.",
"Table 2 shows the same comparison on Ch-En.",
"In the spoken language domain, the average improvement is 1.35 BLEU compared with placeholder , and 0.42 with lexical constraints .",
"In the news domain, the average improvement is 1.38 BLEU compared with placeholder , and 0.74 with lexical constraints .",
"We find that the placeholder method can only bring improvements on the En-Ru e-commerce test sets, since the pre-specified translations of the four e-commerce test sets are mostly entities , such as brand names or product names.",
"Using placeholder tags to represent these entities leads to relatively little loss of word meaning.",
"But on many of the other test sets, pre-specified translations are mostly vocabulary words.",
"The placeholder tags fail to keep their word meaning during translation, leading to lower results.",
"The speed contrast between unconstrained NMT, lexical constraint and our method is shown in Table 3. The decoding speed of our method is equal to unconstrained NMT, and faster than the lexical constraint method, which confirms our in-Beam Size 5 10 20 30 Unconstrained & Ours 416 312 199 146 Lexical Constraint 102 108 74 50 Table 3: Decoding speed (words/sec), Ch-En dev set.",
"Sample Outputs.",
"Figure 3 gives a comparison of different system's translations.",
"Given a Chinese source sentence, the baseline system fails to translate adequately, as family plan-ning is not a correct translation of .",
"In the pre-specified methods, the correct translation ( to planned parenthood) is achieved through different ways.",
"For the placeholder method, the source phrase is replaced with the placeholder tag tag 1 during pre-processing.",
"After translation, output tag 1 is replaced with planned parenthood as a post-processing step.",
"However, the underlined word program is generated before planned parenthood, which has no relationship with any source-side word.",
"The source-side word , which means association, is omitted in translation.",
"Through deeper analysis, the specific phrase program tag 1 occurs frequently in the training data.",
"During decoding, using the hard tag leads to the loss of the source phrase's original meaning.",
"As a result, the word program is incorrectly generated along with tag 1 .",
"The lexical constraints method regards the tar-Figure 4: Increased BLEU on Ch-En test sets.",
"get side of the pre-specified translation as a lexical constraint.",
"Here the altered beam search algorithm fails to predict the constraint planned parenthood during previous decoding steps.",
"Although the constraint finally comes into effect, over translation occurs, which is highlighted by the underlined words.",
"This is because the method enforces hard constraints, preventing decoding to stop until all constraints are met.",
"Our method makes use of pre-specified translation by replacing the source-side phrase with the target-side translation planned parenthood, copying the desired phrase to the output along with the decoding procedure.",
"The translation association of planned parenthood from providing is the exact translation of the source-side phrase (planned) (parenthood) (association) (providing), and agrees with the reference, planned parenthood to provide.",
"Effect of Using More Pre-specified Translations.",
"Even though the augmented training data have only one or two replacements on the source side, the model can translate a source sentence with up to five replacements.",
"Figure 4 shows that compared with unconstrained Transformer, the translation quality of our method keeps increasing when the number of replacements increases, since more pre-specified translations are used.",
"We additionally measure the effect on the Ch-En WMT test sets, namely newsdev2017, new-stest2017, newstest2018, respectively, each having only one reference instead of four.",
"The baseline BLEU scores on these three test sets are 18.49, 20.01 and 19.05, respectively.",
"Our method gives BLEU scores of 20.56, 22.3, 21.08, respectively, when using one or two pre-specified translations for each sentence.",
"The increased BLEU when utilizing different number of pre-specified translations is shown in Figure 4. We found that the improvements on WMT test sets are more sig-nificant than on NIST, since pre-specified translations are sampled from one reference only, enforcing the output to match this reference.",
"The placeholder method does not give consistent improvements on news test sets, due to the same reason as mentioned earlier.",
"As shown in Figure 5, the copy success rate of our method does not decrease significantly when the number of replacements grows.",
"Here, a copy success refers a pre-specified target translation that can occur in the output.",
"The placeholder method achieves a higher copy success rate than ours when the number of replacements is 1, but the copy success rate decreases when using more pre-specified translations.",
"The copy success rate of the lexical constraint method is always 100%, since it imposes hard constraints rather than soft constraints.",
"However, as discussed earlier, overall translation quality can be harmed as a cost of satisfying decoding constraints by their method.",
"In the presented experiment results, the highest copy success rate of our method is 90.54%, which means a number of source-side target words or phrases are not successfully copied to the translation output.",
"This may be caused by the lack of training samples for certain target-side words or phrases.",
"In En-Ru, we additionally train a model with augmented data that is obtained by matching NIST02 NIST03 NIST04 NIST05 Data Aug. 83.89% 85.71% 86.71% 87.45% +Share&Point 87.72% 88.31% 89.18% 90.54% Table 4: Copy success rate on Ch-En test sets.",
"an SMT phrase table without any pruning strategy.",
"The copy success rate can reach 98%, even without using shared embedding and pointer net-work methods.",
"Effect of Shared Embeddings and Pointer Network.",
"The gains of shared embeddings and pointer network are reflected in both the copy success rate and translation quality.",
"As shown in Table 4, when using one pre-specified translation for each source sentence, the copy success rate improves on various test sets by integrating shared embeddings and pointer network, demonstrating that more pre-specified translations come into effect.",
"Table 1 and Table 2 earlier show the improvement of translation quality.",
"Translating non Code-Switched Sentences.",
"Our method preserves its strength on translating non code-switched sentences.",
"As shown in Table 5, the model trained on the augmented corpus has comparable strength on translating unreplaced sentences as the model trained on the original corpus.",
"In addition, on some test sets, our method is slightly better than the baseline when translating non code-switched source sentences.",
"This can be explained from two aspects: First, the augmented data make the model more robust to perturbed inputs; Second, the pointer network makes the model better by copying certain source-side words (Gulcehre et al., 2016), such as non-transliterated named entities.",
"We investigated a data augmentation method for constraining NMT with pre-specified translations, utilizing code-switched source sentences and their translations as augmented training data.",
"Our method allows the model to learn to translate source-side target phrases by copying them to the output, achieving consistent improvements over previous lexical constraint methods on large NMT test sets.",
"To the best of our knowledge, we are the first to leverage code switching for NMT with pre-specified translations.",
"In the future, we will study how the copy success rate and the BLEU scores interact when different sampling strategies are taken to obtain augmented training corpus and when the amount of augmented data grows.",
"Another direction is to validate the performance when applying this approach to language pairs that contain a number of identical letters in their alphabets, such as English to French and English to Italian.",
"We thank the anonymous reviewers for their detailed and constructed comments.",
"Yue Zhang is the corresponding author.",
"The research work is supported by the National Natural Science Foundation of China (61525205).",
"Thanks for Shao-hui Kuang, Qian Cao, Zhongqiang Huang and Fei Huang for their useful discussion."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"method",
"result",
"result",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Knowledge Bases (KBs) require constant updating to reflect changes to the world they represent.",
"For general purpose KBs, this is often done through Relation Extraction (RE), the task of predicting KB relations expressed in text mentioning entities known to the KB.",
"One way to improve RE is to use KB Embeddings (KBE) for link prediction.",
"However, despite clear connections between RE and KBE, lit-tle has been done toward properly unifying these models systematically.",
"We help close the gap with a framework that unifies the learning of RE and KBE models leading to significant improvements over the state-of-the-art in RE.",
"The code is available at https://github.",
"com/billy-inn/HRERE .",
"Knowledge Bases (KBs) contain structured information about the world and are used in support of many natural language processing applications such as semantic search and question answering.",
"Building KBs is a never-ending challenge because, as the world changes, new knowledge needs to be harvested while old knowledge needs to be revised.",
"This motivates the work on the Relation Extraction (RE) task, whose goal is to assign a KB relation to a phrase connecting a pair of entities, which in turn can be used for updating the KB.",
"The state-of-the-art in RE builds on neural models using distant (a.k.a. weak) supervision (Mintz et al., 2009) on large-scale corpora for training.",
"A task related to RE is that of Knowledge Base Embedding (KBE), which is concerned with representing KB entities and relations in a vector space for predicting missing links in the graph.",
"Aiming to leverage the similarities between these tasks, Weston et al. (2013) were the first to show that combining predictions from RE and KBE models was beneficial for RE.",
"However, the way in which they combine RE and KBE predictions is rather naive (namely, by adding those scores).",
"To the best of our knowledge, there have been no systematic attempts to further unify RE and KBE, particularly during model training .",
"We seek to close this gap with HRERE (Het-erogeneous REpresentations for neural Relation Extraction), a novel neural RE framework that learns language and knowledge representations jointly .",
"Figure 1 gives an overview.",
"HRERE 's backbone is a bi-directional long short term memory (LSTM) network with multiple levels of attention to learn representations of text expressing relations.",
"The knowledge representation machinery, borrowed from ComplEx (Trouillon et al., 2016), nudges the language model to agree with facts in the KB.",
"Joint learning is guided by three loss functions: one for the language representation, another for the knowledge representation, and a third one to ensure these representations do not diverge.",
"In effect, this contributes to HRERE 's generalization power by preventing over-fitting by either model.",
"We build on state-of-the-art methods for learning the separate RE and KBE representations and on learning tools that allow us to scale to a moderately large training corpus.",
"(We use a subset of Freebase with 3M entities as our KB.)",
"We validate our approach on an established benchmark against state-of-the-art methods for RE, observing not only that our base model significantly outperforms previous methods, but also the fact that jointly learning the heterogeneous representations consistently brings in improvements.",
"To the best of our knowledge, ours is the first principled framework to combine and jointly learn heterogeneous representations from both language and knowledge for the RE task.",
"Contributions.",
"This paper describes and evaluates a novel neural framework for jointly learning Figure 1: Workflow of the proposed framework.",
"representations for RE and KBE tasks that uses a cross-entropy loss function to ensure both representations are learned together, resulting in significant improvements over the current state-of-the-art for the RE task.",
"Recent neural models have been shown superior to approaches using hand-crafted features for the RE task.",
"Among the pioneers, Zeng et al. (2015) proposed a piecewise convolutional network with multi-instance learning to handle weakly labeled text mentions.",
"Recurrent neural networks (RNN) are another popular architecture (Wu et al., 2017).",
"Similar fast progress has been seen for the KBE task for representing entities and relations in KBs with vectors or matrices.",
"Bordes et al. (2013) introduced the influential translation-based embeddings (TransE), while Yang et al. (2014) leveraged latent matrix factorization in their DistMult method.",
"We build on ComplEx (Trouillon et al., 2016), which extends DistMult into the complex space and has been shown significantly better on several benchmarks.",
"Weston et al. (2013) were the first to connect RE and KBE models for the RE task.",
"Their simple idea was to train the two models independently and only combine them at inference time.",
"While they showed that combining the two models is better than using the RE model alone, newer and better models since then have obviated the net gains of such a simple strategy (Xu and Barbosa, 2018).",
"We propose a much tighter integration of RE and KBE models: we not only use them for prediction, but also train them together, thus mutually reinforcing one another.",
"Recently, many methods have been proposed to use information from KBs to facilitate relation extraction.",
"Sorokin and Gurevych (2017) considered other relations in the sentential context while predicting the target relation.",
"Vashishth et al. (2018) utilized additional side information from KBs for improved RE.",
"However, these methods didn't leverage KBE method to unify RE and KBE in a principled way.",
"Han et al. (2018) used a mutual attention between KBs and text to perform better on both RE and KBE, but their method was still based on TransE (Bordes et al., 2013) which can not fully exploit the advantage of the information from KBs.",
"The goal in the task of Relation Extraction is to predict a KB relation that holds for a pair of entities given a set of sentences mentioning them (or NA if no such relation exists).",
"The input is a KB with relation set R , a set of relations of interest R , R R , and an automatically labelled training dataset D obtained via distant supervision.",
"Given a sentence mentioning entities h, t , the output is a relation r R that holds for h, t or the catch-all relation NA if no such r exists.",
"customary, we denote a KB with relation scheme R as a set of triples T = { ( h, r, t ) E R E } , where E is the set of entities of interest.",
"Distant supervision exploits the KB to automatically annotate sentences in a corpus containing mentions of entities with the relations they participate in.",
"Formally, a labeled dataset for relation extraction consists of fact triples { ( h i , r i , t i ) } Ni =1 and a multi-set of extracted sentences for each triple {S i } Ni =1 , such that each sentence s S i mentions both the head entity h i and the tail entity t i .",
"task is to estimate the probability of each relation in R { NA } .",
"Formally, for each relation r , we want to predict P ( r | h, t, S ) .",
"In practice, the input set of sentences S can have arbitrary size.",
"For the sake of computational effi-ciency, we normalize the set size to a fixed number T by splitting large sets and oversampling small ones.",
"We also restrict the length of each sentence in the set by a constant L by truncating long sentences and padding short ones.",
"We now go over the details of our framework outlined in Figure 1 for unifying the learning of the language and the knowledge representations used for relation extraction.",
"In a nutshell, we use LSTM with attention mechanisms for language representation and we follow the approach of Trouillon et al. (2016) for KB embedding.",
"Input Representation.",
"For each word token, we use pretrained word embeddings and randomly initialized position embeddings (Zeng et al., 2014) to project it into ( d w + d p ) -dimensional space, where d w is the size of word embedding and d p is the size of position embedding.",
"Sentence Encoder.",
"For each sentence s i , we apply a non-linear transformation to the vector representation of s i to derive a feature vector z i = f ( s i ; ) given a set of parameters .",
"In this paper, we adopt bidirectional LSTM with d s hidden units as f ( s i ; ) (Zhou et al., 2016).",
"Multi-level Attention Mechanisms.",
"We employ attention mechanisms at both word-level and sentence-level to allow the model to softly select the most informative words and sentences during training (Zhou et al., 2016; Lin et al., 2016).",
"With the learned language representation s L , the conditional probability p ( r |S ; ( L ) ) is computed through a softmax layer, where ( L ) is the parameters of the model to learn language representation.",
"Following the score function and training procedure of Trouillon et al. (2016), we can get the knowledge representations e h , w r , e t C d k .",
"With the knowledge representations and the scoring function, we can obtain the conditional probability p ( r | ( h, t ); ( G ) ) for each relation r : p ( r | ( h, t ); ( G ) ) = e ( e h ,w r ,e t ) (cid:80) r (cid:48) R{ NA } e ( e h ,w r (cid:48) ,e t ) where ( G ) corresponds to the knowledge representations e h , w r , e t C d k .",
"Since NA / R , we use a randomized complex vector as w NA .",
"As stated, this paper seeks an elegant way of connecting language and knowledge representations for the RE task.",
"In order to achieve that, we use separate loss functions (recall Figure 1) to guide the language and knowledge representation learning and a third loss function that ties the predictions of these models thus nudging the parameters towards agreement.",
"The cross-entropy losses based on the language and knowledge representations are defined as: JL = 1 NN (cid:88) i =1 log p ( r i |S i ; ( L ) ) (1) JG = 1 NN (cid:88) i =1 log p ( r i | ( h i , t i ); ( G ) ) (2) where N denotes the size of the training set.",
"Finally, we use a cross-entropy loss to measure the dissimilarity between two distributions, thus connecting them, and formulate model learning as minimizing JD : JD = 1 NN (cid:88) i =1 log p ( r i |S i ; ( L ) ) (3) where r i = arg max r R{ NA } p ( r | ( h i , t i ); ( G ) ) .",
"Based on Eq.",
"1, 2, 3, we form the joint optimization problem for model parameters as min J = JL + JG + JD + (cid:107) (cid:107) 22 (4) where = ( L ) ( G ) .",
"The knowledge representations are first trained on the whole KB independently and then used as the initialization for the joint learning.",
"We adopt the stochastic gradient descent with mini-batches and Adam (Kingma and Ba, 2014) to update , employing different learning rates lr 1 and lr 2 on ( L ) and ( G ) respectively 4.5 Relation Inference In order to get the conditional probability p ( r | ( h, t ) , S ; ) , we use the weighed average to combine the two distribution p ( r |S ; ( L ) ) and p ( r | ( h, t ); ( G ) ) : p ( r | ( h, t ) , S ; ) = p ( r |S ; ( L ) ) +(1 ) p ( r | ( h, t ); ( G ) ) .",
"where is the combining weight of the weighted average.",
"Then, the predicted relation r is r = argmax r R{ NA } p ( r | ( h, t ) , S ; ) .",
"Datasets.",
"We evaluate our model on the widely used NYT dataset (Riedel et al., 2010) by aligning Freebase relations mentioned in the New York Times Corpus.",
"Articles from years 2005-2006 are used for training while articles from 2007 are used for testing.",
"As our KB, we used a Freebase subset with the 3M entities with highest degree (i.e., participating in most relations).",
"Moreover, to prevent the knowledge representation from memorizing the true relations for entity pairs in the test set, we removed all entity pairs present in the NYT.",
"Evaluation Protocol: Following previous work (Mintz et al., 2009), we evaluate our model using held-out evaluation which approximately measures the precision without time-consuming manual evaluation.",
"We report both Precision/Recall curves and Precision@N (P@N) in our experiments, ignoring the probability predicted for the NA relation.",
"Moreover, to evaluate each sentence in the test set as in previous methods, we append T copies of each sentence into S for each testing sample.",
"Word Embeddings: In this paper, we used the freely available 300-dimensional pre-trained word embeddings distributed by Pennington et al. (2014) to help the model generalize to words not appearing in the training set.",
"Hyperparameter Settings: For hyperparameter tuning, we randonly sampled 10% of the training set as a development set.",
"All the hyperparameters were obtained by evaluating the model on the development set.",
"With the well-tuned hyperparameter setting, we run each model five times on the whole training set and report the average P@N.",
"For Precision/Recall curves, we just select the results from the first run of each model.",
"For training, learning rate on ( L ) lr 1 5 10 4 learning rate on ( K ) lr 2 1 10 5 size of word position embedding d p 25 state size for LSTM layers d s 320 input dropout keep probability p i 0 .",
"Methods Evaluated.",
"We study three variants of our framework: (1) HRERE -base : basic neural model with local loss JL only; (2) HRERE -naive : neural model with both local loss JL and global loss JG but without the dissimilarities JD ; (3) HRERE -full : neural model with both local and global loss along with their dissimilarities.",
"We compare against two previous state-of-the-art neural models, CNN+ATT and PCNN+ATT (Lin et al., 2016).",
"We also implement a baseline Weston based on the strategy following Weston et al. (2013), namely to combine the scores computed Relation Textual Mention base naive full contains Much of the middle east tension stems from the sense that shiite power is growing, led by Iran .",
"with the methods stated in this paper directly without joint learning.",
"Analysis.",
"Figure 2 shows the Precision/Recall curves for all the above methods.",
"As one can see, HRERE -base significantly outperforms previous state-of-the-art neural models and Weston over the entire range of recall.",
"However, HRERE base performs worst compared to all other variants, while HRERE -full always performs best as shown in Figure 2 and Table",
"2. This suggests that introducing knowledge representation consistently results in improvements, which validates our motivating hypothesis.",
"HRERE -naive simply optimizes both local and global loss at the same time without attempting to connect them.",
"We can see that HRERE -full is not only consistently superior but also more stable than HRERE -naive when the recall is less than 0.1.",
"One possible reason for the instability is that the results may be dominated by one of the representations and biased toward it.",
"This suggests that (1) jointly learning the heterogeneous representations bring mutual benefits which are out of reach of previous methods that learn each independently; (2) connecting heterogeneous representations can increase the robustness of the framework.",
"Case Study.",
"Table 3 shows two examples in the testing data.",
"For each example, we show the relation, the sentence along with entity mentions and the corresponding probabilities predicted by HRERE -base and HRERE -full .",
"The entity pairs in the sentence are highlighted with bold formatting.",
"From the table, we have the following observations: (1) The predicted probabilities of three variants of our model in the table match the observations and corroborate our analysis.",
"(2) From the text of the two sentences, we can easily infer that middle east contains Iran and Henry Fonda was born in Omaha .",
"However, HRERE -base fails to detect these relations, suggesting that it is hard for models based on language representations alone to detect implicit relations, which is reasonable to expect.",
"With the help of KBE, the model can effectively identify implicit relations present in the text.",
"(3) It may happen that the relation cannot be inferred by the text as shown in the last example.",
"It's a common wrong labeled case caused by distant supervision.",
"It is a case of an incorrectly labeled instance, a typical occurrence in distant supervision.",
"However, the fact is obviously true in the KBs.",
"As a result, HRERE -full gives the underlying relation according to the KBs.",
"This observation may point to one direction of de-noising weakly labeled textual mentions generated by distant supervision.",
"This paper describes an elegant neural framework for jointly learning heterogeneous representations from text and from facts in an existing knowledge base.",
"Contrary to previous work that learn the two disparate representations independently and use simple schemes to integrate predictions from each model, we introduce a novel framework using an elegant loss function that allows the proper connection between the the heterogeneous representations to be learned seamlessly during training.",
"Experimental results demonstrate that the proposed framework outperforms previous strategies to combine heterogeneous representations and the state-of-the-art for the RE task.",
"A closer inspection of our results show that our framework enables both independent models to enhance each other.",
"This work was supported in part by grants from the Natural Sciences and Engineering Research Council of Canada and a gift from Diffbot Inc."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other"
] |
[
"It is very common to use quotations (quotes) to make our writings more elegant or convincing.",
"To help people find appropriate quotes efficiently, the task of quote recommendation is presented, aiming to recommend quotes that fit the current context of writing.",
"There have been various quote recommendation approaches, but they are evaluated on different unpublished datasets.",
"To facilitate the research on this task, we build a large and fully open quote recommendation dataset called QuoteR, which comprises three parts including English, standard Chinese and classical Chinese.",
"Any part of it is larger than previous unpublished counterparts.",
"We conduct an extensive evaluation of existing quote recommendation methods on QuoteR.",
"Furthermore, we propose a new quote recommendation model that significantly outperforms previous methods on all three parts of QuoteR.",
"All the code and data of this paper can be obtained at https://github.com/ thunlp/QuoteR .",
"A quotation, or quote for short, is a sequence of words that someone else has said or written.",
"Quotes, especially the famous quotes including proverbs, maxims and other famous sayings, are quite useful in writing they can not only help illuminate and emphasize the meaning we want to convey, but also endow our writing with elegance and credibility (Cole, 2008).",
"As a result, the use of quotes is very common and, moreover, universal among all languages.",
"However, it is not an easy job for ordinary people to promptly come up with appropriate quotes that fit the current context of writing, due to the huge number of quotes.",
"Search engines can provide some help in finding quotes by keyword matching, but it is often not enough.",
"Quotes generally Equal contribution Corresponding author.",
"There's an old Bible verse my dad used to say all the time that says sufficient unto the day is the evil thereof , Pyron said.",
"In other words today has its own set of problems, we can't do anything about yesterday, and I don't want to jump too far into tomorrow.",
"express their meanings implicitly by rhetorical devices like metaphor and have different word usages from modern and everyday writing, as illustrated in Figure 1, for which quote search based on keyword matching is ineffective.",
"In addition, some quote repository websites organize quotes by topic.",
"However, even after filtering by topic, there are still too many candidate quotes, and selecting a suitable one remains time-consuming.",
"To tackle these challenges, Tan et al. (2015) introduce the task of quote recommendation, aiming to automatically recommend suitable quotes given the context of writing.",
"1 Afterward, a series of studies propose various approaches to this task (Ahn et al., 2016; Tan et al., 2016, 2018).",
"However, these studies use different evaluation datasets, and none of them are publicly available.",
"The lack of a standard and open dataset is undoubtedly a serious obstacle to the quote recommendation research.",
"In this paper, to solve this problem, we build a large quote recommendation dataset that is publicly available.",
"This dataset is named QuoteR (abbreviated from Quote R ecommendataion) and composed of three parts: (1) the English part that comprises 6,108 English quotes with 126,713 contexts; (2) the standard Chinese (Mandarin) part, which contains 3,004 standard Chinese quotes with 40,842 contexts; and (3) the classical Chinese (Wenyan) part, which comprises 4,438 classical Chinese quotes (including classical poems) and 116,537 contexts.",
"Any part of this dataset is abso-1 This task also has great value to research, as a touchstone for NLP models' abilities in language understanding, semantic matching and linguistic coherence estimation.",
"We conduct a fair and extensive evaluation of existing quote recommendation methods on QuoteR with a thorough set of metrics.",
"By analyzing these methods and their evaluation results, we find two weaknesses of these methods and propose a new method by making corresponding improvements, which we believe would serve as a strong baseline for quote recommendation.",
"First, most existing methods encode contexts and quotes into vectors for quote-context matching, using LSTM (Hochreiter and Schmidhuber, 1997) or CNN (Kim, 2014) as the encoders.",
"These encoders have proven inferior to the pre-trained language models like BERT (Devlin et al., 2019), which limits the final quote recommendation performance.",
"Therefore, we try to utilize a pre-trained language model, specifically BERT, as the sentence encoders to learn representations of quotes and contexts.",
"Considering the huge compute resulting from the large scale of the dataset and the BERT model, it is nontrivial to train the context and quote encoders simultaneously.",
"We design an ingenious training strategy to address this issue.",
"Second, it is harder to learn good representations for quotes compared with contexts, because most quotes are quite pithy, and their words usually carry rich semantics, as shown in Figure",
"1. Existing methods, however, do not address this challenge well.",
"To handle this challenge, we incorporate a kind of general lexical knowledge, namely sememes , into the quote encoder, aiming to improve the representations of quotes.",
"A sememe is defined as the minimum semantic unit in linguistics (Bloomfield, 1926), and the sememes of a word atomically interpret the meaning of the word.",
"Incorporating sememes can bring more semantic information for quote representation learning and conduce to a better quote vector.",
"In experiments, we demonstrate that both the utilization of BERT and the incorporation of sememes substantially improve quote recommendation performance.",
"And the sememe-incorporated BERT-based model significantly outperforms all previous methods on QuoteR.",
"Moroever, ablation and case studies as well as human evaluation further prove its effectiveness.",
"To conclude, our contributions are threefold: (1) building a large and the first open quote recommendation dataset; (2) conducting an extensive and fair evaluation of existing quote recommendation methods; (3) proposing a quote recommendation model that outperforms all previous methods and can serve as a strong baseline for future research.",
"The task of quote recommendation is originally presented in Tan et al. (2015).",
"They propose a learning-to-rank framework for this task, which integrates 16 hand-crafted features.",
"Tan et al. (2016) and Tan et al. (2018) introduce neural networks to the quote recommendation task.",
"They use LSTMs to learn distributed vector representations of contexts and quotes and conduct sentence matching with these vectors.",
"Ahn et al. (2016) combine four different quote recommendation approaches including matching granularity adjustment (a statistical context-quote relevance prediction method), random forest, CNN and LSTM.",
"In addition quote recommendation for writing, some studies focus on recommending quotes in dialog.",
"Lee et al. (2016) propose an LSTM-CNN combination model to recommend quotes according to Twitter dialog threads, i.e., sequences of linked tweets.",
"Wang et al. (2020) utilize an encoder-decoder framework to generate quotes as response, based on the separate modeling of the dialog history and current query.",
"Wang et al. (2021) adopt a semantic matching fashion, which encodes the multiturn dialog history with Transformer (Vaswani et al., 2017) and GRU (Cho et al., 2014) and encodes the quote with Transformer.",
"In terms of the datasets of quote recommendation for writing, Tan et al. (2015) construct an English dataset comprising 3,158 quotes and 64,323 contexts extracted from e-books in Project Gutenberg.",
"2 Ahn et al. (2016) build a similar English dataset that contains 400 most frequent quotes with contexts from e-books in Project Gutenberg and blogs.",
"Tan et al. (2018) build a classical Chinese poetry quotation dataset that comprises over 9,000 poem sentences with 56,949 contexts extracted from Chinese e-books on the Internet.",
"Unfortunately, all these datasets are not publicly available.",
"Quote recommendation is essentially a kind of content-based recommendation task (Pazzani and Billsus, 2007), which is aimed at recommending",
"products to users according to product descriptions and users' profiles.",
"A closely related and widely studied task is content-based citation recommendation (Strohman et al., 2007), especially local citation recommendation that recommends related papers given a particular context of academic writing (He et al., 2010; Huang et al., 2012, 2015).",
"Compared with quote recommendation, this task is targeted at structured documents (papers), which are much longer and possess abundant information such as title, abstract and citation relations that are useful for recommendation.",
"Quotes are shorter and usually have no available information except the text, which renders quote recommendation more challenging.",
"Another highly related but niche task is idiom recommendation (Liu et al., 2018, 2019), which aims to recommend appropriate idioms for a given context.",
"Existing idiom recommendation methods are essentially covered by the quote recommendation methods described in 2.1.",
"Liu et al. (2018) recommend idioms by learning representations of the contexts and idioms, similar to the context-quote relevance-based quote recommendation methods (Ahn et al., 2016; Tan et al., 2018).",
"The difference lies in the use of word embeddings of idioms rather than a sentence encoder.",
"Liu et al. (2019) regard idiom recommendation as a context-to-idiom machine translation problem and use an LSTM-based encoder-decoder framework, which is similar to Wang et al. (2020).",
"In addition to quote recommendation, there are some other quote-related tasks.",
"For example, quote detection (or recognition) that is aimed at locating spans of quotes in text (Pouliquen et al., 2007; Scheible et al., 2016; Pareti et al., 2013; Papay and Pad, 2019), and quote attribution that intends to automatically attribute quotes to speakers in the text (Elson and McKeown, 2010; O'Keefe et al., 2012; Almeida et al., 2014; Muzny et al., 2017).",
"Different from quote recommendation that focuses on famous quotes, these tasks mainly deal with the general quotes of utterance.",
"Before describing our dataset and model, we first formulate the task of quote recommendation for writing and introduce several basic concepts, most of which follow previous work (Tan et al., 2015).",
"For a piece of text containing a quote q , the text segment occurring before the quote is named left context c l while the text segment occurring after the quote is named right context c r .",
"The concatenation of left and right contexts form the quote context c = [ c l ; c r ] .",
"Suppose there is a quote set that comprises all the known candidate quotes Q = { q 1 , , q | Q | } , where | | denotes the cardinality of a set.",
"In the task of quote recommendation for writing, a query context c = [ c l ; c r ] is given, and the gold quote q c is wanted, where the query context is the context provided by the user and the gold quote is the quote in the quote set that fits the query context best.",
"Theoretically, a query context may have more than one gold quote because there are some quotes that convey almost the same meaning.",
"Following previous work (Tan et al., 2015; Ahn et al., 2016), for simplicity, we only regard the quote that actually appears together with the query context in corpora as the gold quote.",
"For a quote recommendation model, given the quote set Q , its input is a query context c = [ c l ; c r ] , and it is supposed to calculate a rank score for each candidate quote in Q and output a quote list according to the descending rank scores.",
"In this section, we present the building process and details of the QuoteR dataset.",
"We begin with the English part.",
"We choose the popular and free quote repository website Wikiquote 3 as the source of English quotes.",
"We download its official dump and extract over 60,000 English quotes in total to form the quote set.",
"We notice that previous work (Tan et al., 2015; Ahn et al., 2016) collects quotes from another website named Library of Quotes, but this website has closed down.",
"To obtain real contexts of quotes, we use three corpora.",
"The first is the Project Gutenberg corpus that previous studies use, which comprises over 50,000 e-books.",
"The second corpus is BookCorpus containing about 11,000 e-books (Zhu et al., 2015).",
"In addition to the two book corpora, we use the OpenWebText corpus (Gokaslan and Cohen, 2019) which is composed of text from web pages and has different text styles from books.",
"The total size of the raw text of the three corpora reaches 48.8 GB.",
"We search all the corpora for the occurrences of quotes in the quote set.",
"Some quotes are composed of multiple sentences, and only part of them are cited in some cases.",
"To cope with this situation, we split each quote into sentences using Stanza (Qi et al., 2020) and then search for each constituent sentence in the corpora.",
"If multiple constituent sentences of a quote appear sequentially, we combine them into an occurrence of the quote.",
"Compared with previous work that searches for quotes as a whole (Tan et al., 2015; Ahn et al., 2016), we can find more quote occurrences.",
"For each quote occurrence, we take the 40 words preceding and following it as its left and right contexts, respectively.",
"The concatenation of the left and right contexts forms a context, and a context and the corresponding quote form a context-quote pair.",
"We remove the repeated context-quote pairs and filter out the quotes appearing less than 5 times in the corpora.",
"To avoid dataset imbalance, we randomly select 200 context-quote pairs for a quote appearing more than 200 times and discard its other context-quote pairs.",
"Finally, we obtain 126,713 context-quote pairs involving 6,108 different quotes, which form the English part of QuoteR.",
"We split all the context-quote pairs into training, validation and test sets roughly in the ratio 8:1:1, making sure that all the quotes appear in the validation and test sets while 100 quotes do not appear in the training set.",
"We split the dataset in this way in order to observe how quote recommendation models perform in the zero-shot situation, where the model has never seen the gold quote of some vali-dation/test contexts during training.",
"The statistics of the final split dataset are listed in Table",
"1. 4.2 The Standard Chinese Part We gather standard Chinese quotes from a large quote collection website named Juzimi 4 .",
"More than 32,000 standard Chinese quotes are collected 4 https://www.juzimi.com/ altogether.",
"To obtain quote contexts, we use two corpora including a corpus composed of answer text from a Chinese QA website 5 and a large-scale book corpus that we specifically build and comprises over 8,000 free Chinese e-books.",
"The total size of the two corpora is about 32 GB.",
"Then we use the same method in building the English part to extract quote occurrences from the corpora.",
"Since Chinese is not naturally word-segmented, we take the 50 characters (rather than words) before and after a quote occurrence as the left and right contexts.",
"In addition, since there are fewer quotes and contexts for the standard Chinese part, we reduce the minimum number of occurrences for a selected quote to 3, and the maximum number of retained contexts per quote to 150.",
"After deduplication and filtering, we obtain the standard Chinese part of QuoteR, which has 40,842 context-quote pairs involving 3,004 quotes.",
"We split the standard Chinese part in the same way as the English part, and the statistics are also shown in Table",
"1. 4.3 The Classical Chinese Part Classical Chinese quotes, including classical poems and proverbs, are often cited in standard Chinese writing.",
"Considering that classical Chinese is very different from standard Chinese, we separate classical Chinese quotes from standard Chinese ones.",
"We collect over 17,000 classical Chinese quotes from Gushiwenwang, 6 a classical Chinese poetry and literature repository website, and aforementioned Juzimi.",
"7 Then we adopt the same way as standard Chinese to extract context-quote pairs from the two Chinese corpora and conduct deduplication and filtering.",
"Finally, we obtain the classical Chinese part of QuoteR that comprises 116,537 context-quote pairs of 4,438 quotes.",
"The statistics of this part after splitting are also in Table",
"1. 4.4 Quality Assessment by Human After the construction of QuoteR, we assess its quality by human.",
"For each part, we randomly sample 100 context-quote pairs, and ask three annotators to independently determine whether each quote fits the corresponding context.",
"The final results are 5 https://github.com/brightmart/nlp_ chinese_corpus 6 https://www.gushiwen.org/ 7 Juzimi provides the dates when the quotes appear so that we can distinguish classical and standard Chinese quotes.",
"obtained by voting.",
"Finally, 99/98/94 context-quote pairs are regard as suitable for the three parts, respectively.",
"The results verify the quality of QuoteR, which is expected because the data are extracted from high-quality corpora like books.",
"In this section, we elaborate on our proposed quote recommendation model.",
"This model is based on the representative pre-trained language model BERT (Devlin et al., 2019), but can be readily adapted to other pre-trained language models.",
"Similar to most previous methods (Tan et al., 2016; Ahn et al., 2016), we use BERT as the text encoder to learn vector representations of contexts and quotes, and then calculate the similarity between the representations of the query context and a candidate quote as the rank score of the quote.",
"We first obtain the representations of quotes.",
"Formally, for a candidate quote comprising m tokens q = { x 1 , , x m } Q , we feed it into BERT and obtain a series of hidden states: h q [C] , h q 1 , , h q m = BERT q ( [C] , x 1 , , x m ) , (1) where [C] denotes the special [CLS] token in BERT that is added to the front of a sequence.",
"Following Devlin et al. (2019), we use the hidden state of [C] as the representation of the quote: q = h q [C] .",
"The representations of all quotes form the quote representation matrix Q = [ q 1 , , q | Q | ].",
"We can use another BERT as the context encoder to obtain the representation of the query context c = [ c l ; c r ] .",
"Considering the context is composed of left and right contexts that are not naturally joined, we can insert an additional separator token between them before feeding them into BERT: h c [C] , = BERT c ( [C] , c l , [S] , c r ) , (2) where [S] is the sentence separator token [SEP] in BERT.",
"We can also use the hidden state of [C] as the representation of the context: c = h c [C] .",
"However, it is actually inconsistent with the general use of BERT.",
"Whether in pre-training or fine-tuning, when the input to BERT is two text segments connected by the separator token, the hidden state of [CLS] is only used to classify the relation between the two segments, e.g., to predict whether the second segment is the actual next sentence of the first segment in the next sentence prediction (NSP) pre-training task (Devlin et al., 2019).",
"We turn to another pre-training task of BERT, masked language modeling (MLM), which is a cloze task (Taylor, 1953) aimed at predicting masked tokens.",
"Specifically, some tokens in a text sequence are randomly substituted by the special [MASK] tokens and the hidden states of the [MASK] tokens are fed into a classifier to predict the original tokens.",
"Quote recommendation given context can be regarded as a special cloze task whose object of prediction is quotes rather than tokens.",
"Inspired by the MLM pre-training task, we propose another way to learn the context representation by inserting an additional [MASK] token: h c [C] , , h c [M] , = BERT c ( [C] , c l , [M] , c r ) , (3) where [M] is the [MASK] token.",
"We use the hidden state of [M] as the representation of the query context: c = h c [M] .",
"8 Calculating Rank Scores of Candidate Quotes After obtaining the representations of all candidate quotes and the query context, the rank score of a candidate quote can be calculated by softmax: p = softmax( Q c ) , (4) where p is a normalized probability vector whose i -th element is the rank score of the i -th quote.",
"As in previous work (Tan et al., 2016), we can simply use the cross-entropy loss to train the quote and context encoders simultaneously.",
"However, there are two problems.",
"(1) For each context in the training set, the quote encoder needs to be updated for every quote in the quote set.",
"In other words, the BERT-based quote encoder would be fine-tuned thousands of times per training instance, which requires formidably big GPU memory and long training time.",
"9 (2) The huge imbalance between positive and negative samples (one vs. several thousands) would weaken the capacity of the 8 The hidden state of [M] can also be regarded as the representation of the required quote for the query context.",
"In this view, the rank score in Eq.",
"(4) is actually calculated by the similarity between a candidate quote and the required quote.",
"9 We find that four 16-GB GPUs would be out of memory during training even though we set the batch size to",
"1. 340 quote encoder and, in turn, impair the final quote recommendation performance.",
"A simple solution is to freeze the quote encoder during training, i.e., use the raw pre-trained BERT as the quote encoder, and train the context encoder only.",
"But the untrained quote encoder would decrease final quote recommendation performance, as demonstrated in later experiments.",
"To address these issues, inspired by the study on noise contrastive estimation (NCE) (Gutmann and Hyvrinen, 2012), we adopt the negative sampling strategy in training.",
"For each context-quote pair, we select some non-gold quotes as negative samples, and calculate a pseudo-rank score of the gold quote among the selected quotes.",
"Formally, for a context-quote pair ( c, q ) , the pseudo-rank score of q is p = e q c e q c + (cid:80) q N ( q ) e q c , (5) where N ( q ) is the set of quotes selected as negative samples.",
"The problem about quote encoder training has been largely solved, but the context encoder may be under-trained.",
"The context encoder needs to process lots of contexts and thus requires more training than the quote encoder.",
"Therefore, we adopt a two-stage training strategy.",
"After the simultaneous training of quote and context encoders in the first stage, we continue to train the context encoder while freezing the quote encoder in the second stage.",
"The training loss of the second stage is the cross-entropy loss among all quotes.",
"Most quotes are quite pithy, and thus it is usually hard to learn their representations well.",
"To obtain better quote representations, previous work tries incorporating external information, including the topic and author information of quotes, in the quote encoder (Tan et al., 2016, 2018).",
"Although helpful, this external information is not always available or accurate quite a few quotes are anonymous, and the topics attributed to quotes are usually from crowdsourcing and uninspected.",
"We propose to incorporate sememe knowledge into quote representation learning, which is more general (every word can be annotated with sememes) and credible (the sememe annotations of words are given by experts).",
"A sememe is the minimum semantic unit of human languages (Bloom-field, 1926), and it is believed that meanings of all words can be represented by a limited set of sememes.",
"Sememe knowledge bases like HowNet (Dong and Dong, 2006) use a set of predefined sememes to annotate words, so that the meaning of a word can be precisely expressed by its sememes.",
"With the help of such sememe knowledge bases, sememes have been successfully utilized in various NLP tasks (Qi et al., 2021a), including semantic composition (Qi et al., 2019), word sense disambiguation (Hou et al., 2020), reverse dictionary (Zhang et al., 2020a), adversarial attacks (Zang et al., 2020), backdoor learning (Qi et al., 2021b), etc.",
"Inspired by the studies on incorporating sememes into recurrent neural networks (Qin et al., 2020) and transformers (Zhang et al., 2020b) to improve their representation learning ability, we adopt a similar way to incorporate sememes into the quote encoder.",
"We simply add the average embedding of a word's sememes to every token embedding of the word in BERT.",
"Formally, for a word in a quote that is divided into n tokens after tokenization w = x 1 , , x n , the embedding of its each token x i is transformed into x i x i + | S ( w ) | (cid:88) s j S ( w ) s j , i = 1 , , n (6) where S ( w ) is the sememe set of the word w , and is a hyper-parameter controlling the weight of sememe embeddings.",
"Following previous work (Qin et al., 2020), the sememe embeddings are randomly initialized and updated during training.",
"In this section, we evaluate our model and previous quote recommendation methods on QuoteR.",
"We have three groups of approaches for comparison.",
"The first group consists of two methods that widely serve as baselines in previous studies.",
"(1.1)",
"CRM , namely context-aware relevance model (He et al., 2010) that recommends the quote whose known contexts are most similar to the query context.",
"(1.2)",
"LSTM , which uses two LSTM encoders to learn representations of quotes and contexts.",
"The second group includes representative approaches proposed in previous studies.",
"(2.1) top-k RM , namely top-k rank multiplication (Ahn et al., 2016), which is a rank aggregation method based 341 Part English Standard Chinese Classical Chinese Model MRR NDCG R / R / R Recall@1/10/100 MRR NDCG R / R / R Recall@1/10/100 MRR NDCG R / R / R Recall@1/10/100 CRM 0.192 0.193 599/1169/1408 16.51/23.66/32.78 0.397 0.407 13/325/584 33.60/49.32/61.70 0.198 0.203 166/548/811 14.52/28.79/44.51 LSTM 0.321 0.320 30/334/727 27.23/40.78/62.47 0.292 0.290 48/338/574 24.78/37.71/58.06 0.247 0.245 56/341/633 20.08/33.23/56.96 top-k RM 0.422 0.431 6/548/1243 35.99/53.31/66.20 0.480 0.494 3/377/774 40.17/60.67/72.26 0.294 0.299 48/511/980 23.54/39.58/56.90 NNQR 0.318 0.319 31/359/773 26.78/41.10/61.29 0.271 0.271 54/348/595 22.94/35.72/57.18 0.272 0.270 41/310/620 22.03/36.59/60.63 N-QRM 0.365 0.368 28/777/1465 32.24/44.41/58.26 0.343 0.347 55/575/890 30.20/41.22/54.15 0.287 0.288 98/917/1373 24.88/35.02/49.49 Transform 0.561 0.568 1/241/749 50.11/65.88/79.98 0.512 0.519 2/271/576 45.50/60.31/72.83 0.449 0.453 5/269/663 39.01/55.78/73.58 BERT-Sim 0.526 0.529 2/487/1064 49.38/58.05/67.75 0.500 0.508 2/229/511 44.47/59.07/72.21 0.439 0.443 7/320/711 38.85/53.04/68.32 BERT-Cls 0.310 0.329 7/134/453 18.15/57.11/82.05 0.378 0.395 5/152/413 26.88/57.90/78.38 0.330 0.345 8/ 135 / 377 21.93/54.27/78.75 Ours 0.572 0.580 1/ 123 / 433 50.74/ 69.03 / 83.84 0.541 0.548 2/ 139 / 370 47.91 / 64.97 / 79.35 0.484 0.490 3/146/422 41.67 / 60.78 / 79.38 -Sememe 0.568 0.574 1/145/492 51.05 /67.07/82.34 0.535 0.543 2/160/402 47.62/63.66/77.68 0.475 0.481 3/152/435 40.93/60.26/78.39 -ReTrain 0.299 0.307 12/176/503 20.46/47.89/75.74 0.255 0.260 20/210/435 16.87/42.94/68.43 0.265 0.269 17/184/450 17.87/43.56/72.89 -SimTrain 0.529 0.532 2/467/1060 49.31/58.97/69.48 0.519 0.526 2/204/489 46.00/62.03/75.34 0.465 0.470 4/310/713 41.40/55.53/70.09 Table 2: Quote recommendation results on the three parts of QuoteR.",
"on the ensemble of a statistical method, random forest, CNN and LSTM.",
"(2.2)",
"NNQR (Tan et al., 2016), which reforms LSTM by incorporating additional quote information (topic and author) into the quote encoder and perturbing the word embeddings of quotes.",
"(2.3)",
"N-QRM (Tan et al., 2018), which further improves NNQR mostly by adjusting the training loss to prevent overfitting.",
"(2.4)",
"Transform (Wang et al., 2021), which uses Trans-former+GRU to encode contexts and transforms context embeddings into the space of quote embeddings learned from another Transformer.",
"10 The third group comprises two BERT-based approaches that are frequently utilized in sentence matching and sentence pair classification.",
"(3.1)",
"BERT-Sim , which is the vanilla BERT-based model discussed in 5.1.",
"It directly uses the hidden states of the [CLS] tokens as the representations of both quotes and contexts, and freezes the quote encoder during training, as explained in 5.2.",
"(3.2)",
"BERT-Cls , which conducts a binary classification for the concatenation of the query context and a candidate quote.",
"Following previous work (Ahn et al., 2016; Tan et al., 2018), we use three evaluation metrics: (1) Mean reciprocal rank ( MRR ), the average reciprocal values of the ranks of the gold quotes; (2) Normalized discounted cumulative gain ( NDCG@K ) (Jrvelin and Keklinen, 2002), a widely used measure of ranking quality and is computed by",
"where r ( i ) = 1 if the i -th quote is the gold quote, otherwise r ( i ) = 0 , ZK = 1 is a normalization constant.",
"We report the average of NDCG@5 scores of all the evaluated query contexts.",
"(3) Re-call@K , the proportion of query contexts whose gold quotes are ranked in respective top K candidate quotes, K = { 1 , 10 , 100 } .",
"Besides, we use another three evaluation metrics: (4) Median Rank ( R ), (5) Mean Rank ( R ) and (6) Rank Variance ( R ), the median, average and standard deviation of the ranks of gold quotes.",
"The higher MRR, NDCG@K and Recall@K and the lower R , R and R are, the better a model is. 6.3 Implementation Details We use BERTBASE for both English and Chinese from Transformers (Wolf et al., 2020).",
"We use the AdamW optimizer (Loshchilov and Hutter, 2018) with an initial learning rate 5e-5 that gradually declines to train our model.",
"We randomly select N negative samples, and N is tuned in {4,9,19,29,39} on the validation set.",
"The weight of sememe embeddings is tuned in {0.1, 0.2, 0.5, 1.0, 2.0}.",
"The underlined numbers are final picks.",
"For the previous methods, we use their original hyperparameters and experimental settings given in the papers.",
"Table 2 lists the evaluation results of different methods on the three parts of QuoteR.",
"We observe that (1) our method achieves the best overall results and displays its superiority to other methods; (2) the two BERT-based models, especially BERT-Sim, yield quite high performance, which reflects the importance of a powerful sentence encoder to quote recommendation; (3) among the three parts, almost all methods perform worse on Classical Chinese, 342 Part English Standard Chinese Classical Chinese Model MRR NDCG R / R / R Recall@1/10/100 MRR NDCG R / R / R Recall@1/10/100 MRR NDCG R / R / R Recall@1/10/100 CRM 0.154 0.156 353/948/1297 11.88/21.78/33.66 0.292 0.296 124/401/524 25.28/35.39/48.43 0.141 0.146 276/587/763 9.88/19.75/34.57 LSTM 0.272 0.271 89/552/992 23.38/33.87/51.12 0.210 0.208 146/483/662 18.26/27.67/45.50 0.182 0.178 117/465/750 13.87/25.44/47.80 top-k RM 0.360 0.366 30/833/1497 31.20/44.55/56.80 0.350 0.358 38/620/926 29.77/44.40/55.53 0.276 0.280 77/645/1088 22.61/36.16/52.57 NNQR 0.267 0.266 98/592/1043 22.82/33.48/50.28 0.224 0.223 145/495/683 17.16/27.67/45.81 0.189 0.187 98/441/766 14.18/26.86/50.29 N-QRM 0.270 0.272 156/1145/1735 23.40/33.18/46.54 0.266 0.270 287/778/946 21.27/30.63/42.32 0.215 0.215 356/1232/1505 17.72/27.13/40.73 Transform 0.438 0.443 6/429/1036 38.47/53.43/68.65 0.371 0.374 29/465/748 32.54/44.83/58.04 0.331 0.334 29/435/842 27.76/42.87/60.85 BERT-Sim 0.399 0.401 44/839/1407 36.95/44.75/54.32 0.364 0.370 41/431/695 31.71/44.28/56.18 0.310 0.313 56/522/902 26.32/39.05/54.56 BERT-Cls 0.265 0.275 15/ 237 / 640 16.75/45.37/71.77 0.213 0.220 24/318/646 12.47/40.53/64.67 0.204 0.208 25/253/568 11.50/38.27/66.73 Ours 0.456 0.462 4 /254/685 39.62 / 56.21 / 73.26 0.413 0.419 7 / 97 / 186 34.64 / 53.29 / 75.91 0.409 0.411 9 / 196 / 419 35.22 / 51.47 / 70.82 Table 3: Quote recommendation results on the three parts of QuoteR, given the left context only .",
"which is presumably because Chinese BERT is pre-trained on standard Chinese corpora and not suitable to encode the classical Chinese quotes.",
"We conduct ablation studies to investigate the effectiveness of our training strategy and the incorporation of sememes.",
"We first remove the incorporation of sememes (-Sememe), then further do not separately train the context encoder after the simultaneous training of the context and quote encoders (-ReTrain), and finally discard the simultaneous training of the two encoders and train the context encoder only (-SimTrain).",
"-SimTrain differs BERT-Sim only in the choice of context representation ( [MASK] vs. [CLS] ).",
"The results of ablation studies are given in the last three rows of Table",
"2. We have the following observations: (1) -Sememe causes consistent performance decline as compared to Ours, which demonstrates the role of sememes in improving quote encoding, thereby benefiting quote recommendation; (2) the performance of -ReTrain is pretty poor, which reflects the necessity of separate training for the context encoder after simultaneous training; (3) -SimTrain is inferior to -Sememe, which displays the usefulness of simultaneously training the two encoders; (4) -SimTrain outperforms BERT-Sim, proving the superiority of choosing [MASK] to represent contexts in our method.",
"Following previous work (Tan et al., 2015; Ahn et al., 2016; Tan et al., 2018), the evaluation experiments are mainly conducted in the setting where both the left and right contexts are given.",
"However, in practical terms, quote recommendation given the left context only might be more useful.",
"Therefore, we also conduct experiments in the setting where only the left context is given.",
"Table 3 shows the 0 1 (1,5] (5,10] (10,20] (20,50] (50,100] (100,150] Number of Contexts in the Training Set 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 MRR / NDCG @ 5 MRR NDCG@5 Figure 2: Recommendation performance for quotes within different occurrence frequency ranges.",
"results.",
"We can see that our method is still the best one on all three parts.",
"In addition, the performance of all methods decreases substantially, which indicates that both the left and right contexts provide important information for quote recommendation.",
"In this subsection, we investigate the effect of the gold quote's occurrence frequency on recommendation performance.",
"Figure 2 shows MRR and NDCG@5 results for quotes that have different numbers of contexts in the training set of the standard Chinese part.",
"We observe that the occurrence frequency has great impact on quote recommendation performance.",
"Basically, increasing occurrences of quotes in the training set can increase recommendation performance, because we can learn better representations for the quotes with more adequate training.",
"But the most frequent quotes does not have the best performance, possibly because these quotes carry very rich semantics and can be cited in various contexts , which makes it very hard to correctly recommend them.",
"In addition, the performance for the unseen quotes is very limited.",
"It reflects the 343 #NS MRR NDCG R / R / R Recall@1/10/100 4 0.533 0.540 2 / 161 / 412 47.48 / 63.23 / 77.68 9 0.534 0.541 2 / 148 / 381 47.50 / 63.97 / 78.83 19 0.541 0.548 2 / 139 / 370 47.91 / 64.97 / 79.35 29 0.545 0.552 2 / 174 / 434 47.06 / 63.58 / 76.92 39 0.535 0.543 2 / 132 / 357 47.17 / 64.97 / 79.43 Table 4: Quote recommendation results with different negative sample numbers (#NS).",
"In this subsection, we investigate the effect of the negative sample number (#NS), a hyper-parameter of our method, on quote recommendation performance.",
"Table 4 gives the results of different negative sample numbers on the validation set of the standard Chinese part of QuoteR.",
"We can see that increasing negative samples (from 4 to 19) can increase quote recommendation performance, which is because the quote encoder can be trained more sufficiently.",
"However, when the negative samples continue increasing, the performance fluctuates or even decreases.",
"That is possibly because of the imbalance of positive and negative samples (there is only one positive sample, namely the gold quote), as explained in 5.2.",
"Therefore, taking both performance and computation efficiency into consideration, we choose 19 as the final negative sample number.",
"As mentioned in 3, there may be other quotes that are suitable for a query context besides the gold quote.",
"Hence, we conduct a human evaluation on the recommendation results of our method.",
"We randomly select 50 contexts from the validation set of the standard Chinese part and list the top 10 quotes recommended by our method for each context.",
"Then we ask annotators to make a binary suitability decision for the quotes.",
"Each quote is annotated by 3 native speakers and the final decision is made by voting.",
"For each context, we regard the suitable quote with the highest ranking as the gold quote, and re-evaluate the recommendation performance: NDCG@5=0.661, Recall@1/10=0.50/0.92.",
"11 In contrast, the original evaluation results among the 50 contexts are 11 Since we only annotate the top 10 results, there are no other available metrics than NDCG@5 and Recall@1/10.",
"NDCG@5=0.439 , Recall@1/10=0.36/0.64.",
"By comparison, we can conclude that the real performance of our method is substantially underestimated.",
"We also count the average number of suitable quotes among the top 10 quotes, which is 1.76.",
"We feed the context in Figure 1 into our model, and print the top 5 recommended quotes and their rank scores in Table",
"5. We find that the gold quote is ranked second, but the first one is actually another statement version of the gold quote and has exactly the same meaning.",
"In addition, the third and fourth quotes are also related to the context.",
"This case, together with more cases in Appendix B, can demonstrate the practical effectiveness and usefulness of our model.",
"In this paper, we build a large and the first open dataset of quote recommendation for writing named QuoteR and conduct an extensive evaluation of existing quote recommendation methods on it.",
"We also propose a new model that achieves absolute outperformance over previous methods, and its effectiveness is proved by ablation studies.",
"In the future, we will try to improve our model in handling classical Chinese quotes by using a special classical Chinese pre-trained model to encode them.",
"We will also consider boosting the performance of our model in the few-shot and zero-shot situations.",
"This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), Institute Guo Qiang at Tsinghua University, and International Innovation Center of Tsinghua University, Shanghai, China.",
"We also thank all the anonymous reviewers for their valuable comments and suggestions.",
"Dataset and Human Evaluation In terms of our QuoteR dataset, all the quotes are collected from free and open quote repository websites.",
"Besides, all the contexts are extracted from open corpora, including free public domain e-books and other open corpora.",
"Therefore, there is no intellectual property problem for the dataset.",
"In addition, we conduct the human evaluation by a reputable data annotation company.",
"The annotators are fairly compensated by the company, based on the previous annotation tasks.",
"Further, we do not directly communicate with the annotators, so that their privacy is well preserved.",
"Finally, the dataset and the human evaluation are not sensitive and thus do not need to be approved by the institutional review board (IRB).",
"Application Quote recommendation is a practical task and our model can be put into service.",
"In actual use cases, users just need to input a query context and our model should output a list of candidate quotes that fit the given context.",
"All people may benefit from our model during writing.",
"If our model fails, some inappropriate quotes that cannot fit the query context would be output, but no one would be harmed.",
"There are indeed biases in the dataset we build.",
"Some quotes are very frequent while the others are not, as illustrated in 6.6.",
"The infrequent quotes are less recommended and may cause the failure of our model in some cases.",
"In terms of misuse, to the best of our knowledge, such a quote recommendation model is hardly misused.",
"After the deployment of our model, the system would not collect data from users.",
"It does not have any potential harm to vulnerable populations, either.",
"Energy Saving To save energy, we use the base version of BERT rather than larger pre-trained language models, although the larger ones would probably yield better performance.",
"Besides, as discussed in 5.2, we find that the simultaneous training of the context and quote encoders requires very big memory and computation resources, and thus we adopt the strategy of negative sampling in training.",
"Use of Identity Characteristics In this work, we do not use any demographic or identity characteristics information."
] | [
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"other",
"result",
"abstain",
"method",
"method",
"method",
"result",
"result",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method"
] |
[
"Knowledge distillation (KD) is the preliminary step for training non-autoregressive translation (NAT) models, which eases the training of NAT models at the cost of losing important information for translating low-frequency words.",
"In this work, we provide an appealing alternative for NAT monolingual KD , which trains NAT student on external monolingual data with AT teacher trained on the original bilingual data.",
"Monolingual KD is able to transfer both the knowledge of the original bilingual data (implicitly encoded in the trained AT teacher model) and that of the new monolingual data to the NAT student model.",
"Extensive experiments on eight WMT benchmarks over two advanced NAT models show that monolingual KD consistently outperforms the standard KD by improving low-frequency word translation, without introducing any computational cost.",
"Monolingual KD enjoys desirable expandability, which can be further enhanced (when given more computational budget) by combining with the standard KD, a reverse monolingual KD, or enlarging the scale of monolingual data.",
"Extensive analyses demonstrate that these techniques can be used together profitably to further recall the useful information lost in the standard KD.",
"Encouragingly, combining with standard KD, our approach achieves 30.4 and 34.1 BLEU points on the WMT14 English-German and German-English datasets, respectively.",
"Our code and trained models are freely available at https://github.com/ alphadl/RLFW-NAT.mono .",
"Non-autoregressive translation (NAT, Gu et al. 2018) has been proposed to improve the decoding efficiency by predicting all tokens independently",
"and simultaneously.",
"However, the independence assumption prevents a model from properly capturing the highly multimodal distribution of target translations.",
"In response to this problem, a sequence-level knowledge distillation (KD, Kim and Rush 2016) becomes the preliminary step for training NAT models, which produces more deterministic knowledge by reducing the translation modes of the bilingual data (Zhou et al., 2020).",
"Although the standard KD on original bilingual data eases the training of NAT models, distillation may lose some important information in the raw training data, leading to more errors on predicting low-frequency words (Ding et al., 2021c,b).",
"To remedy this problem, Ding et al. (2021c) augmented NAT models the ability to learn lost knowledge from the raw bilingual data with an additional objective, and Ding et al. (2021b) first pre-trained NAT models on the raw training data and then fine-tuned them on the distilled training data.",
"While previous studies mainly focus on recalling the lost information during the distillation of the original bilingual data, in this work we propose to improve the prediction of low-frequency words by redistributing them in the external monolingual data, which has the great potential to complement the original bilingual data on the word distribution.",
"Specifically, we leverage the monolingual data to perform KD ( monolingual KD , 2.2), and train the NAT student model on the distilled monolingual data (Figure 1b).",
"Monolingual KD provides appealing benefits.",
"Firstly, the monolingual data and bilingual data in machine translation are generally complementary to each other (Zhang and Zong, 2016; Wu et al., 2019; Zhou and Keung, 2020; Sid-dhant et al., 2020; Jiao et al., 2021).",
"Accordingly, monolingual KD is able to transfer both the knowledge of the bilingual data (implicitly encoded in the trained teacher model) and that of the monolingual data to the NAT student, without introducing additional computational cost.",
"Secondly, the amount 2417 of available monolingual data is several orders of magnitude larger than that of bilingual data, which offers monolingual KD the potential to further improve translation performance by exploiting more monolingual data.",
"Furthermore, we analyze the bilingual links in the bilingual and monolingual distilled data from two alignment directions (i.e. source-to-target and target-to-source).",
"We found that the monolingual KD makes low-frequency source words aligned with targets more deterministically compared to bilingual KD, but both of them fail to align low-frequency words from target to source due to information loss.",
"Starting from this finding, we propose reverse monolingual KD to recall more alignments for low-frequency target words.",
"We then concatenate two kinds of monolingual distilled data ( bidirectional monolingual KD , 2.3) to maintain advantages of deterministic knowledge and low-frequency information.",
"We validated our approach on several translation benchmarks across scales (WMT14 En De, WMT16 Ro En, WMT17 Zh En, and WMT19 En De) over two advanced NAT models: Mask Predict (Ghazvininejad et al., 2019) and Levenshtein (Gu et al., 2019).",
"Experiments demonstrate the effectiveness and universality of our approach.",
"Specifically, we have the following findings: Monolingual KD achieves better performance than the standard KD in all cases, and the proposed bidirectional monolingual KD can further improve performance by a large margin.",
"Monolingual KD enjoys appealing expandability: enlarging the scale of monolingual data consistently improves performance until reaching the bottleneck of model capacity.",
"Monolingual KD is complementary to the standard KD, and combining them obtains further improvement by alleviating two key issues of NAT, i.e., the multimodality problem and the low-frequency word translation problem.",
"The paper is an early step in exploring monolingual KD for NAT, which can narrow the performance gap between NAT models and the SOTA AT models.",
"We hope the promising effect of monolingual KD on NAT can draw more interest and can make NAT a common translation framework.",
"Non-Autoregressive Translation Recent years have seen a surge of interest in NAT (Gu et al., 2018), which can improve the decoding efficiency by predicting all tokens independently and simultaneously.",
"Specifically, the probability of generating a target sentence y by given the source sentence x is computed as p ( y | x ) = p L ( T | x ; ) (cid:81) Tt =1 p ( y t | x ; ) , where T is the length of y , which is predicted by a separate conditional distribution p L ( ) .",
"The parameters are trained to maximize the likelihood of a set of training examples according to L ( ) = arg max log p ( y | x ; ) .",
"The conditional independence assumption prevents an NAT model from properly capturing the highly multimodal distribution of target translations ( multimodality problem , Gu et al., 2018).",
"As a result, the translation quality of NAT models often lags behind that of AT models (Vaswani et al., 2017).",
"Standard Knowledge Distillation Knowledge distillation is the preliminary step for training NAT models by reducing the modes in the original bilingual data, which makes NAT easily acquire more deterministic knowledge and achieve significant improvement (Zhou et al., 2020).",
"Typically, a sequence-level KD (Kim and Rush, 2016) is employed for NAT training, as shown in Figure 1a.",
"Different Distributions of Source Words To empirically reveal the difference on word distribution between bilingual and monolingual data, we visualize the overall word distributions, as plotted in Figure 2.",
"We can observe the significant difference between bilingual and monolingual data in the low-frequency part, which indicates that the words that occur less in the bilingual data are not necessarily low-frequent in the external monolingual data.",
"Starting from the observation, we propose to exploit external monolingual data to offer more useful information for predicting low-frequent words in bilingual data, which are generally lost in the standard knowledge distillation.",
"Our Approach Researches and competitions have shown that fully exploiting the monolingual data is at the core of achieving better generalization and accuracy for MT systems (Sennrich et al., 2016a; Zhang and Zong, 2016; Barrault et al., 2418 src tgt NAT Student Parallel Data Synthetic Data src tgt AT Teacher train src distill train NAT Student AT Teacher train Monolingual Data reuse src tgt Parallel Data train src tgt Synthetic Data src distill",
"2020).",
"In this work we want to transfer the distribution of lost information, e.g. low-frequency words, from monolingual data to the NAT training.",
"Figure 1b shows the pipeline of our proposed Monolingual KD for NAT, which differs from the Standard KD at how to construct the distilled data.",
"Instead of reusing the source side of the original bilingual data, monolingual KD performs distillation on newly monolingual data, which eliminates the dependency on the original training data.",
"Intuitively, the monolingual KD can embed both the knowledge of the original bilingual data (im-plicitly encoded in the trained teacher model) and that of the newly introduced monolingual data.",
"The comprehensive experiments in the following section provide empirical support for our hypothesis.",
"In addition, the complementarity between the bilingual and monolingual data makes explicitly combining Standard KD and Monlingual KD can further improve model performance.",
"Recalling Low-Frequency Target Words KD simplifies the training data by replacing low-frequency target words with high-frequency ones (Zhou et al., 2020; Ding et al., 2021c).",
"This is able to facilitate easier aligning source words to target ones, resulting in high bilingual coverage (Jiao et al., 2020).",
"Inspired by the low-frequency word (LFW) links analysis (Ding et al., 2021b), we borrow this LFW analysis to show the necessity of leveraging both the sourceand target-side monolingual data.",
"Concretely, we follow (Ding et al., 2021b) to evaluate the links of low-frequency words aligning from source to target (s (cid:55) t) with three metrics: Recall (R) represents how many low-frequency source words can be aligned to targets; Precision (P) means how many aligned low-frequency links are correct ac-2419 cording to human evaluation.",
"F1 is the harmonic mean between precision and recall.",
"Similarly, we can analyze in an opposite direction (t (cid:55)",
"s) by considering the links of low-frequency target words.",
"Table 1 lists the results.",
"Comparing with the standard KDB , the forward monolingual KD ( KDM in Section 2.2) achieves better alignment quality of s (cid:55) t LFW links (F1: 80.9 vs. 80.5) by aligning more low-frequency source words (R: 75.1 vs. 73.4).",
"The backward monolingual KD ( KDM ) can complementarily produce better alignment of low-frequency target words (t (cid:55) s LFW links).",
"As we expected, combining the two types of distilled data ( KDM ) can produce better alignments for both low-frequency source (F1: 82.1 vs. 80.5) and target words (F1: 79.9 vs. 74.2).",
"Our Approach ( Bid. Monolingual KD ) Based on the above observations, we propose to train NAT models on bidirectional monolingual data by concatenating two kinds of distilled data.",
"Like back-translation (Edunov et al., 2018), the reverse monolingual distillation KDM is to synthesize the source sentences by a backward AT teacher, which is trained in the reverse direction of the original bilingual data.",
"The mixture of the source-original and target-original synthetic datasets (i.e. KDM ) is used to train the final NAT model.",
"We expect that the better alignments of LFW links can lead to overall improvement of translation performance.",
"Bilingual Data We conducted experiments on two widely-used NAT benchmarks: WMT14 English-German and WMT16 English-Romanian tasks, which consist of 4.5M and 0.6M sentence pairs respectively.",
"To prove the universality of our approach on large-scale data, we also validated on WMT17 English-Chinese and WMT19 English-German tasks, which consist of 20.6M and 36.8M sentence pairs respectively.",
"We shared the source and target vocabularies, except for En Zh data.",
"We split the training data into subword units using byte pair encoding (BPE) (Sennrich et al., 2016b) with 32K merge operations, forming a vocabulary of 37k, 32k, 33k/48k and 44k for WMT14 En De, WMT16 En Ro, WMT17 En Zh and WMT19 En De respectively.",
"We used case-sensitive token-BLEU (Papineni et al., 2002) to measure the translation quality (except for En-Zh, we used sacre-T a s k Lang.",
"Monolingual Data We closely followed previous works to randomly sample monolingual data from publicly available News Crawl corpus 1 for the WMT tasks (Sennrich et al., 2016a; Wu et al., 2019).",
"We randomly sampled English and German data from News Crawl 2007 2020, and randomly sampled Romanian data from News Crawl 2015.",
"For Chinese monolingual data, we used News Crawl 2008 2020, News Commendary v16 and XMU data.",
"For fair comparison, the monolingual data generally has the same size as corresponding bilingual data, as listed in Table 2.",
"two state-of-the-art NAT models: MaskPredict [MaskT, Ghazvininejad et al. 2019] that uses the conditional masked language model (Devlin et al., 2019) to iteratively generate the target sequence from the masked input.",
"We followed its optimal settings to keep the iteration number be 10 and length beam be 5.",
"Levenshtein Transformer [LevT, Gu et al. 2019] that introduces three steps: deletion, placeholder prediction and token prediction, and the decoding iterations adaptively depends on certain conditions.",
"We followed their setting and reproduced their reported results.",
"We trained both BASE and BIG Transformer (Vaswani et al., 2017) as the AT teachers for both standard and monolingual KD.",
"For BIG models, we adopted large-batch training (i.e. 458K to-1 http://data.statmt.org/news-crawl 2420 Data MaskT LevT BLEU (cid:52) BLEU (cid:52) KDB 25.4 25.6 KDM 25.8 +0.4 26.2 +0.6 KDM 24.9 -0.5 24.5 -1.1 KDM 26.6 +1.2 26.7 +1.1 KDM + KDB 26.7 +1.3 26.8 +1.2 KDM + KDB 26.6 +1.2 26.5 +0.9 KDM + KDB 27.1 +1.7 27.3 +1.7 Table 3: BLEU scores of different monolingual distillation strategies.",
"kens/batch) to optimize the performance (Ott et al., 2018).",
"The En Ro tasks employed Transformer-B ASE as the teacher, and the other tasks used Transformer-B IG as the teacher.",
"We also used large-batch (i.e. 480K tokens/batch) to train NAT models with Adam optimizer (Kingma and Ba, 2015).",
"The learning rate warms up to 1 10 7 for 10K steps, and then decays for 60k steps with the cosine schedule (Ro En models only need 4K and 21K steps, respectively).",
"Following the common practices (Ghazvininejad et al., 2019; Kasai et al., 2020), we evaluate the performance on an ensemble of 5 best checkpoints (ranked by validation BLEU) to avoid stochasticity.",
"In this section, we evaluated the impact of different components of the monolingual KD on WMT14 En-De validation sets.",
"Impact of Distillation Strategy Table 3 lists the results of different distillation strategies.",
"The forward monolingual KD ( KDM ) consistently outperforms its standard counterpart ( KDB ) (i.e. 25.8 vs. 25.4, and 26.2 vs. 25.6), which we attribute to the advantage of monolingual KD on exploiting both the original bilingual data knowledge (implicitly encoded in the trained AT teacher model) and the new monolingual data knowledge.",
"Concatenating forwardand reverse-KD ( KDM ) can further improve the NAT performance, which is consistent with the findings in Table 1.",
"is complementary to standard KD (i.e. + KDB column).",
"As seen, standard KD consistently improves translation performance across monolingual KD variants.",
"Another interesting finding is that although reverse monolingual KD ( KDM ) significantly underperforms its forward counterpart ( KDM ) when used alone, they achieve comparable performance when using together with standard KD.",
"We discuss in details how the two KD models complement each other in Section 3.4.",
"Impact of Monolingual Data Sampling Some researchers may doubt that our approach heavily depends on the sampled monolingual data.",
"To dispel the doubt, we investigated whether our model is robust to the selected monolingual data by varying the sampling strategies.",
"Specifically, we conducted experiments on the full set of monolingual data from News Crawl 2007 2020, which consist of 243M English and 351M German sentences.",
"We compared with two representative approaches that sampled data with different priors: (1) LOWFREQ samples difficult examples containing low-frequency words (Fadaee and Monz, 2018); (2) LM-SEL selects high quality examples with language model (Moore and Lewis, 2010).",
"As listed in Table 4, the difference of three sampling strategies w.r.t BLEU is not significant under the significance test p < 0 .",
"05 (Collins et al., 2005), demonstrating that our approach is robust to the monolingual data sampling .",
"For the simplicity and robust applicability of our approach across different scenarios, we used RANDOM sampling as the default strategy in the following experiments.",
"NAT Benchmarks Table 5 lists the results on the WMT14 En De and WMT16 En Ro benchmarks.",
"Encouragingly, the conclusions in Section 3.2 hold across language pairs, demonstrating the effectiveness and universality of our approach.",
"We also compared the performance against several 2421 Model Iter.",
"previous competitive NAT models.",
"Although the results are not directly comparable since we used additional monolingual data, our approach improves previous SOTA BLEU on the NAT benchmarks.",
"Notably, our data-level approaches neither modify model architecture nor add extra training loss, thus does not increase any latency (Speed), maintaining the intrinsic advantages of NAT models.",
"The main side-effect of our approach is the increased training time for training an additional AT teacher model to build distilled data in the reverse direction.",
"Fortunately, we can eliminate the side-effect by using only the monolingual KD (Mono. KD), which still consistently outperforms the standard KD without introducing any computation cost.",
"Larger-Scale WMT Benchmarks To verify the effectiveness of our method across different data sizes, we further experimented on two widely-used large-scale MT benchmarks, i.e. WMT17 En Zh and WMT19 En De.",
"As listed in Table 6, our bidi-Model En-Zh En-De AT Teacher 35.6 24.6 40.2 40.1 MaskT +Stand.",
"rectional monolingual KD outperforms standard KD by averagely +1.9 and +2.3 BLEU points on En Zh and En De datasets, respectively, demonstrating the robustness and effectiveness of our monolingual KD approach.",
"By combining with standard KD, our methods can achieve further +1.8 and +0.9 BLEU improvements.",
"In this section, we provide some insights into how monolingual KD works.",
"We report the results on WMT14 En-De data using Mask-Predict.",
"Monolingual KD Reduces Complexity of Training Data by Improving Low-Frequency Word Alignment We first present data-level qualitative analyses to study how monolingual KD complements bilingual KD.",
"Zhou et al. (2020) revealed that standard KD improves NAT models by reducing the complexity of original bilingual data.",
"Along this thread, we used the data complexity metric to measure different distilled datasets.",
"Formally, the translation uncertainty of a source sentence x can be operationalized as conditional entropy: H ( Y | X = x) = (cid:88) y Y p (y | x) log p (y | x) T x (cid:88) t =1 H ( y | x = x t ) , where T x denotes the length of the source sentence, x and y represent a word in the source and target vocabularies, respectively.",
"We run fast-align on each parallel corpus to obtain word alignment.",
"For fair comparison, we sampled the subsets (i.e. 4.5M) of KDM and KDM + KDB to perform complexity computation.",
"As seen in Table 7, standard KD significantly reduces the data complexity compared to that of the Data WMT14 En-De WMT14 De-En H M L H M L AT Teacher Raw Data 84.7 80.2 73.0 85.4 81.1 74.2 NAT Student KDB 82.4 78.2 68.4 83.7 79.6 69.9 KDM 82.9 78.4 69.5 83.9 80.1 71.2 + KDB 83.1 78.7 70.8 84.3 80.5 72.1 KDM 84.1 79.1 72.7 85.0 80.9 73.4 + KDB 84.6 79.7 73.6 85.2 81.4 75.2 Table 8: Accuracy of word translation.",
"bilingual data (1.95 vs. 3.67), and monolingual KD reduces even more data complexity.",
"Additionally, the data complexity can be further reduced by combining with standard KD.",
"Monolingual KD Mainly Improves Low-Frequency Word Translation We first followed Ding et al. (2021c) to measure the translation accuracy of words with different frequencies, as shown in Table 8.",
"The improvements over low-frequency words are the major reason for the performance gains, where the monolingual KD and bidirectional monolingual KD outperform the standard KD by averagely +1.2% and +3.9%, respectively.",
"These findings confirm our hypothesis that monolingual KD can improve the translation of low-frequency words by redistributing them in the new monolingual data.",
"Combining with standard KD can further improve the accuracy of translating low-frequency words, which reconfirms our hypothesis on the complementarity between the two KD methods on low-frequency words.",
"In this section, we provide some potential directions to further improve NAT performance by making the most of monolingual data.",
"Exploiting Monolingual Data at Scale One strength of monolingual KD is the potential to exploit more monolingual data to further improve translation performance.",
"To validate our claim, we scaled the size of monolingual data by { 2 , 5 , 10 }, which are randomly sampled from the full set of monolingual data.",
"As shown in Table 9, 2423 Mono WMT14 En-De WMT14 De-En Size MaskT LevT MaskT LevT Bidirectional Monolingual KD 1 29.1 29.5 32.6 33.6 2 29.7 30.1 33.1 33.9 5 30.6 30.9 33.9 34.5 10 30.4 30.8 33.3 34.4 Combining with Standard KD 1 30.1 30.4 33.7 34.1 2 30.7 30.9 34.2 34.5 5 31.3 31.7 34.5 34.7 10 30.9 31.5 34.2 34.6 Table 9: BLEU scores of using monolingual data at scale.",
"enlarging the monolingual data consistently improves the BLEU scores, while this trend does not hold when further scaling the monolingual data (i.e. 10 ).",
"One possible reason is that the limited capacity of NAT-base models cannot fully exploit the large data, which suggests future exploration of larger NAT architectures.",
"Augmenting AT Teacher with Monolingual KD An alternative to exploit monolingual data is to strength the AT teacher with monolingual KD, as listed in Table 10.",
"Applying monolingual KD for AT teacher is less effective than using it for NAT training, which we attribute to the information loss when transferred from AT teacher to NAT student.",
"Applying monolingual KD to both AT teacher and NAT student can further improve the NAT performance, at the cost of more computational cost.",
"To bridge the performance gap, a number of recent efforts have explored, including model architectures (Ghazvininejad et al., 2019; Gu et al., 2019; Ding et al., 2020; Guo et al., 2020), training objectives and methods (Shao et al., 2019; Ghazvininejad et al., 2020; Ding et al., 2021a).",
"Another thread of work focus on understanding and improving distillation training for NAT (Zhou et al., 2020; Ding et al., 2021c,b; Huang et al., 2022).",
"Sequence-level KD (Kim and Rush, 2016) is a preliminary step for training NAT models to reduce the intrinsic uncertainty and learning diffi-culty (Zhou et al., 2020; Ren et al., 2020).",
"Recent studies have revealed that KD reduces the modes (i.e. multiple lexical choices for a source word) in the original data by re-weighting the training examples (Furlanello et al., 2018; Tang et al., 2020), at the cost of losing some important information, leading to more errors on predicting low-frequency words (Ding et al., 2021c).",
"In response to this problem, Ding et al. (2021b) proposed to rejuvenate low-frequency words by pretraining NAT models on the raw bilingual data.",
"In this study, we attempt to solve this problem from a different perspective rediscovering low-frequency words from external monolingual data, which can simultaneously exploit the knowledge of bilingual data (implicitly encoded in the parameters of AT teacher).",
"Closely related to our work, Zhou and Keung (2020) improved NAT models by augmenting source-side monolingual data.",
"Their work can be regarded as a special case of our approach (i.e. Mono. KD + Standard KD in Section 3.3), and our work has several more contributions.",
"Firstly, we demonstrated the effectiveness of using only monolingual KD for NAT models, which can achieve better performance than the standard KD without introducing any computational cost.",
"Secondly, we proposed a novel bidirectional monolingual KD to exploit both the source-side and target-side monolingual data.",
"Finally, we provide insights into how monolingual KD complements the standard KD.",
"In this work, we propose a simple, effective and scalable approach monolingual KD to redistribute the low-frequency words in the bilingual data using external monolingual data.",
"Monolingual KD consistently outperforms the standard KD with more translation accuracy of low-frequency words, 2424 which attribute to its strength of exploiting both the knowledge of the original bilingual data (implicitly encoded in the parameters of AT teacher) and that of the new monolingual data.",
"Monolingual KD enjoys appealing expandability, and can be further enhanced by (1) combining with a reverse monolingual KD to recall more alignments for low-frequency target words; (2) combining with the standard KD to explicitly combine both types of complementary knowledge; (3) enlarging the scale of monolingual data that is cheap to acquire.",
"Our study empirically indicates the potential to make NAT a practical translation system.",
"Future directions include designing advanced monolingual KD techniques and validating on larger-capacity NAT models (e.g. BIG setting) to strengthen the power of monolingual KD, and fully NAT models (Gu and Kong, 2021; Du et al., 2021) to show the universality of monolingual KD.",
"Besides, it will be interesting to follow Liu et al. (2021) and Wang et al. (2022) to investigate the complementarity between our monolingual KD and pretrained language models to further enhance the NAT models.",
"We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Supervised parsing models have achieved impressive results on in-domain texts.",
"However, their performances drop drastically on out-of-domain texts due to the data distribution shift.",
"The shared-private model has shown its promising advantages for alleviating this problem via feature separation, whereas prior works pay more attention to enhancing shared features but neglect the in-depth relevance of specific ones.",
"To address this issue, we for the first time apply a dynamic matching network on the shared-private model for semi-supervised cross-domain dependency parsing.",
"Meanwhile, considering the scarcity of target-domain labeled data, we leverage unlabeled data from two aspects, i.e., designing a new training strategy to improve the capability of the dynamic matching network and fine-tuning BERT to obtain domain-related contextualized representations.",
"Experiments on benchmark datasets show that our proposed model consistently outperforms various baselines, leading to new state-of-the-art results on all domains.",
"Detailed analysis on different matching strategies demonstrates that it is essential to learn suitable matching weights to emphasize useful features and ignore useless or even harmful ones.",
"Besides, our proposed model can be directly extended to multi-source domain adaptation and achieves best performances among various baselines, further verifying the effectiveness and robustness.",
"Dependency parsing aims to capture syntactic and semantic information over input words via a dependency tree.",
"As depicted in Figure 1, given an input sentence s = w 0 w 1 . . . w n , a dependency tree is defined as d = { ( h, m, l ) , 0 h n, 1 m n, l L} , where ( h, m, l ) is a dependency from the head word w h to the modifier word w m with the relation label l L .",
"Recently, supervised neural models have achieved significant improvements in dependency parsing (Chen and Manning, $",
"2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Li et al., 2019a).",
"Particularly, Dozat and Manning (2017) propose a BiAffine parser and achieve good results on various languages.",
"In order to obtain better performance, supervised parsing models rely on sufficient in-domain training data.",
"However, the parsing accuracy degrades significantly when the training data is from out-of-domain that has a large gap between the in-domain data.The main reason can be attributed to different feature distributions between source and target domains.",
"Thus modeling the relevance of these distributions becomes the key challenge for cross-domain dependency parsing.",
"In the past few years, semi-supervised dependency parsing has attracted more attention with the surge of labeled web data that are user-generated non-canonical texts (Yu et al., 2013; Peng et al., 2019; Li et al., 2019b; Dakota et al., 2021).",
"As shown in Figure 2, these approaches for modeling the similarity and discrepancy among different domains can be classified into three categories.",
"The fully-shared model treats source and target domains equally and shares all model parameters, which may extract domain-invariant features but fail to capture domain-specific ones.",
"In contrast, the fully-private model exploits completely independent encoders for each domain, which can better capture domain-specific features but ignore domain-invariant ones.",
"To combine the advantages 1035 BiLSTM (sha) x src x tgt MLP BiAffine",
"of fully-shared and fully-private models, the shared-private model naturally separates domain-invariant and domain-specific features via shared and private encoders (Daum III, 2007; Kim et al., 2016).",
"However, this model still has two issues, i.e., neglecting the in-depth relevance of specific ones and failing to utilize unlabeled data effectively.",
"For the first issue, we investigate feature transfer approaches that encourage the target feature space to learn useful knowledge from source domain (Zagoruyko and Komodakis, 2017; Jang et al., 2019; Wright and Augenstein, 2020; Li et al., 2020a).",
"Particularly, Jang et al. (2019) successfully use meta-learning to learn transfer weights between heterogeneous architectures and tasks.",
"Motivated by this work, we propose a dynamic matching network based on the shared-private model for semi-supervised dependency parsing.",
"Concretely, our model automatically generates matching weights to emphasize useful information and filter useless or even harmful features, thus further improving the power of target feature space.",
"For the second issue, considering that manually annotating samples for a new domain is time-consuming and expensive, we endeavour to effectively utilize target-domain unlabeled data.",
"We design a new training strategy to use unlabeled data to enhance the power of matching network, thus modeling more effective specific features for the target domain.",
"Meanwhile, we fine-tune BERT model with language model loss to obtain more reliable domain-related contextualized representations.",
"Experiments on benchmark datasets show that our proposed model outperforms the top submitted system in the NLPCC-2019 shared task (Li et al., 2019c), leading to new state-of-the-art results on all domains.",
"In addition, detailed analysis on different matching settings reveals insights on the effect of intermediate source features.",
"The extension on multi-source domain adaptation further verifies the effectiveness and robustness of our model.",
"The code is released at https://github.com/ suda-yingli/ACL2022-match to facilitate future research.",
"In this work, we use the simple yet effective BiAffine parser (Dozat and Manning, 2017) as our basic model, which consists of four components, i.e., Input Layer , BiLSTM Encoder , MLPs (multi-layer perceptron) , and BiAffines .",
"Inputs.",
"Each input word w i is mapped into a dense vector x i .",
"The vector is the concatenation of pre-trained word embedding emb word i and its Chinese character representation rep char i , x i = emb word i rep char i (1) where rep char i is generated by using one-layer BiLSTM to encode the characters of word w i (Lam-ple et al., 2016).",
"In addition, we also use BERT representations to enhance our baseline where emb word i is substituted by rep BERT i simply.",
"BiLSTM.",
"A three-layer BiLSTM is applied to sequentially encode the input vectors x 0 x 1 . . . x n in two independent directions (forward and back-ward), and generates context-aware word representations h 0 h 1 . . . h n via combining the outputs of both directions.",
"MLPs.",
"Two separate MLPs are used to obtain syntax-related lower-dimensional vectors.",
"i i or a dependent word.",
"BiAffines.",
"The score of a dependency i j is obtained via BiAffine attention, score ( i j ) = r H j U 1 r D i + r H j U 2 (3) where U 1 and U 2 are parameters.",
"After obtaining the scores, the parser finds the highest-scoring tree with the dynamic programming algorithm known as maximum spanning tree (McDonald et al., 2005).",
"Then, the classification of dependency labels is treated as a separate task, and the arc-factorization score is computed as follows: score ( i l j ) = r H j U 3 r D i +( r H j r D i ) U 4 + b (4) where U 3 , U 4 , and b are parameters, and l is the relation label.",
"Parsing loss.",
"During training, the parser computes two independent cross-entropy losses for each position, i.e., maximizing the probability of its correct head and the correct label between them.",
"where w j is the gold-standard head of w i , and l is the corresponding gold relation label.",
"Semi-supervised dependency parsing aims at learning a parser that generalizes well to the target domain.",
"Although supervised parser has achieved good results on in-domain data, the parsing performance drops dramatically when the training data is mainly from the out-of-domain.",
"The shared-private model has been proven effective for alleviating this problem.",
"However, the model ignores the in-depth relevance of specific ones and fails to directly use unlabeled data for model training.",
"To address these problems, we for the first time apply a dynamic matching network on the shared-private model to learn appropriate matching weights automatically via mimicking well-trained source features.",
"As shown in Figure 3, our model mainly contains two components, i.e., a shared-private schema for feature separation and a dynamic matching network BiLSTM (sha) BiLSTM (tgt) BiLSTM (src) x tgt x src x tgt x tgt x src MLP BiAffine Dynamic Matching Matching Loss Figure 3: The framework of our proposed model.",
"for capturing the relevance of domain-specific features.",
"In addition, we propose a new strategy for our model training to make full use of all labeled and unlabeled data.",
"The framework of vanilla shared-private model is shown in Figure",
"2(c).",
"First, each input word is encoded by the shared BiLSTM and its private BiLSTM to obtain domain-invariant and domain-specific representations.",
"Then, the two representations are combined as the final context-aware representation h i , which is fed into shared MLPs to obtain syntax-related information.",
"Next, we obtain the scores of dependency arcs and labels via shared BiAffines.",
"Finally, all model parameters are updated via minimizing the parsing loss.",
"Orthogonality constraints.",
"Although the shared-private model has separated domain-invariant and domain-specific features via the shared and private encoders, the two type features may interfere with each other.",
"To alleviate this problem, we apply orthogonality constraints to encourage the domain-specific features to be mutually exclusive with the shared ones.",
"Following Bousmalis et al. (2016), we define the loss of orthogonality constraints as follows: L ort = (cid:40)(cid:80) ni =0 (cid:13)(cid:13) ( h i ) T s i (cid:13)(cid:13) , if w i { src } (cid:80) ni =0 (cid:13)(cid:13) ( h i ) T t i (cid:13)(cid:13) , if w i { tgt } (6) where h i is the output of shared BiLSTM, s i and t i are the outputs of source-domain and target-domain private BiLSTMs.",
"In practical application, some source features are more important than others while some are irrelevant or even harmful depending on the domain",
"differences.",
"Hence, directly neglecting source features seems especially profligate.",
"Motivated by Jang et al. (2019), we for the first time apply a dynamic matching network on the shared-private model to learn matching weights automatically.",
"Thus the model is able to pay more attention on useful source features and ignore the useless ones.",
"Considering the source features are well-trained with sufficient labeled data, mimicking these features may be helpful for enhancing the power of the target feature representational space.",
"Hence, we minimize l 2 objection to transfer the knowledge from source features to the target ones: || f ( t mi ) s ni || 22 (7) where f ( ) is a linear transformation, t mi is the m th -layer output of target-domain private BiLSTM, and s ni is the n th -layer output of source-domain private BiLSTM.",
"As shown in Figure 4, the key of dynamic matching network is learning layer matching weights W and element matching weights Q .",
"Layer matching weights W .",
"Intuitively, each intermediate feature of source domain has a completely different effect on the target domain.",
"When we exploit the matching network to learn useful information from the source domain, a key problem is to decide the layer matching pair ( n, m ) .",
"Previous works select the matching pair based on prior knowledge of architectures or semantic similarities between tasks (Romero et al., 2015; Zagoruyko and Komodakis, 2017).",
"To reduce the complexities of matching pair selection, we use a learnable layer matching weight W ( n,m ) 0 for each pair ( n, m ) which can decide the amount of feature matching between the n th -layer outputs of source-domain private BiLSTM and the m th -layer outputs of target-domain private BiLSTM.",
"where the Relu function is used to ensure non-negativeness of W .",
"Element matching weights Q .",
"After obtaining layer matching weight W n,m , we need to learn element matching weight Q n,md to emphasize the useful intermediate elements according to their utility on the target domain.",
"The matching loss of matching pair ( n, m ) is L n,m mat = W n,m 1 DD (cid:88) d =1 Q n,md ( f ( t mi ) s ni ) 2 d (9) where D is the dimension of the BiLSTM output.",
"Q n,md is the non-negative weight of element d with (cid:80) Dd =1 Q n,md = 1 .",
"Since the important elements to transfer can vary for each input word w i , we set element transfer weights as follows: Q n,md = softmax (cid:16) r n,m ( s ni ) (cid:17) d (10) where r n,m ( ) is the linear transformation.",
"After obtaining the element and layer matching weights, the combined matching loss is L mat = 1 K (cid:88) n,m L n,m mat = 1 KD (cid:88) n,m W n,m D (cid:88) d =1 Q n,md ( f ( t mi ) s ni ) 2 d (11) where n, m { 1 , 2 , 3 } are the layer number of BiLSTM, and K = 3 3 = 9 is the number of matching pairs.",
"In order to make full use of all available training data, our model adopts a joint training strategy as shown in Algorithm 1. Here we split parameters in the model into two groups: 1) parsing parameters include all parameters of shared-private model and the linear function f ( ) ; 2) matching parameters include the parameters of all functions that generate matching weights.",
"To balance the parsing and matching tasks, we give them different loss weights and update the dynamic matching parameters with a smaller learning rate.",
"In the joint training process, minibatches of source domain and target domain take turns to train (lines 3-5 and 6-11, respectively).",
"When the minibatch is from the source domain, we update the parsing parameters with parsing and orthogonality losses.",
"When the minibatch comes from target domain, if the data is annotated, the total model ( & ) is jointly trained with parsing, orthogonality and matching losses; otherwise only matching parameters are updated with the matching loss.",
"Datasets.",
"We use the Chinese multi-domain dependency parsing datasets released at the NLPCC-2019 shared task 1 , containing four domains: one source domain which is a balanced corpus (BC) from news-wire, three target domains which are the product comments (PC) data from Taobao, the product blog (PB) data from Taobao headline, and a web fiction data named ZhuXian (ZX).",
"Table 1 shows the detailed data statistics.",
"Evaluation.",
"We use unlabeled attachment score (UAS) and labeled attachment score (LAS) to two-evaluate the dependency parsing accuracy (Hajic et al., 2009).",
"Each model is trained for at most 1 , 000 iterations, and the performance is evaluated on the dev data after each iteration for model selection.",
"We stop the training if the peak performance does not increase in 100 consecutive iterations.",
"Hyper-parameters.",
"We set the dimension of char embedding to 100.",
"We train word2vec (Mikolov et al., 2013) on Chinese Gigaword Third Edition to obtain pre-trained word embeddings.",
"To see the effect of contextualized representations, we use the released Chinese BERT-Base model 2 to 1 http://hlt.suda.edu.cn/index.php/ Nlpcc-2019-shared-task 2 https://github.com/google-research/ bert yield BERT representations for each word.",
"The averaged sum of the top four layer outputs is reduced into a dimension of 100 via an MLP.",
"The learning rate for feature matching network and loss weights and are set as 10 4 , 0 .",
"01 , and 0 .",
"01 .",
"For other hyper-parameters, we keep the default configuration in BiAffine parser (Dozat and Manning, 2017).",
"Baseline models.",
"To verify the effectiveness of our proposed model, we select the following models as our strong baselines.",
"FulSha (Fully-shared).",
"The FulSha model, shown as Figure",
"2(a), directly trains the BiAffine parser with all labeled data from source and target domains.",
"FulPri (Fully-private).",
"The FulPri model, shown as Figure",
"2(b), exploits two independent BiLSTMs to separate source and target features absolutely.",
"ShaPri (Shared-private).",
"As shown in Figure",
"2(c), the ShaPri model can combine the advantages of fully-shared and fully-private models.",
"It captures domain-invariant and domain-specific features simultaneously via utilizing two private and one shared BiLSTMs.",
"DoEmb (Domain Embedding).",
"The DoEmb model, proposed by Li et al. (2019b), has been proven effective for semi-supervised dependency parsing.",
"The key idea is to use an extra domain embedding to indicate which domain the input sentence comes from.",
"ADE (Adversarial Domain Embedding).",
"Li et al. (2020b) successfully apply adversarial learning on the Doemb model and achieve good performances on semi-supervised dependency parsing.",
"They leverage an extra domain embedding to capture domain-related information and adversarial network to extract more shared knowledge across different domains.",
"To gain more insights on the data distribution of different domains, we give a detailed analysis on our benchmark datasets from both lexical and syntactic aspects.",
"On the one hand, we calculate word distributions for each domain.",
"Figure 5 clearly shows that the same word appearing in different domains has completely different distributional probabilities.",
"For example, the distributional probabilities 1039 Figure 5: Word distributional probabilities of different domains.",
"of word are 0.46 in BC domain, 0.43 in PC domain, 0.35 in PB domain, and 0.26 in ZX domain.",
"Thus, it may inevitably lead to the shift of data distributions between different domains.",
"On the other hand, we count sentence distributions for each domain based on the punctuation of input sentences.",
"As shown in Figure 6, we find that the sentence distribution of source domain (BC) is similar to target domains (PB and ZX), but it is much different from the PC domain.",
"The main reason is that the data of PC domain is non-canonical and contains a lot of ellipsis phenomena.",
"Hence, not all source domain knowledge is equally important for the target domain, and it is necessary to automatically select the useful information from the source domain to enhance the performance on the target domain.",
"In this work, we leverage unlabeled data from two aspects: 1) learning more appropriate matching weights via enhancing the power of the dy-77",
"namic matching network; 2) obtaining more reliable domain-related word representations by fine-tuning BERT.",
"Since the amount of labeled data on target domain is much smaller than the source domain, we attempt to utilize target-domain unlabeled data to help the model to learn matching weights.",
"Figure 7 illustrates the influence of unlabeled data sizes on dev data.",
"In each curve, we fix the size of source-domain labeled data and incrementally add a random subset of target-domain unlabeled data.",
"On the one hand, enlarging the size of unlabeled data leads to consistent improvements when the ratio is less than 3/4.",
"This shows that the unlabeled data plays an important role in the matching weights learning.",
"On the other hand, we can see that the parsing performance slightly degrades when the ratio increases larger than 1, indicating that the usefulness of the unlabeled data becomes limited when the size is too large.",
"Additionally, we leverage large-scale target-domain unlabeled data to fine-tune BERT model parameters, and detailed comparative experimental results are shown in Table 2. First, we observe that the model with fine-tuned BERT consistently outperforms the one with primary BERT representations, demonstrating that fine-tuning BERT is able to learn domain-related knowledge.",
"Second, even the accuracy gap between different models reduces, our proposed model still achieves better per-1040 PC PB ZX AVG UAS LAS UAS LAS UAS LAS UAS LAS Results of previous works Yu (19) * 72.18 64.12 82.57 77.83 80.53 75.84 78.43 72.60 Peng (19) FE 73.16 64.33 83.05 78.57 82.09 77.08 79.43 73.33 Li (19) FB* 75.25 67.77 85.53 81.51 86.14 81.65 82.30 76.98 Li (20) FB 75.93 68.34 85.07 80.99 85.94 81.45 82.31 76.93 Compare with baseline models FulPri 70.02 61.43 79.60 74.74 76.56 71.05 75.39 69.07 FulSha 69.66 61.21 80.03 75.26 79.42 74.55 76.37 70.34 ShaPri 70.47 62.06 80.14 75.10 79.27 74.21 76.63 70.46 DoEmb 70.31 61.45 79.71 74.67 79.65 74.61 76.56 70.24 ADE 71.41 63.16 80.35 75.55 80.26 75.30 77.34 71.33 Our 71.91 63.88 81.24 76.61 80.44 75.58 77.86 72.03 Enhance models with BERT representations FulPri 72.75 65.08 83.96 79.64 83.08 78.48 79.93 74.40 FulSha 73.87 66.12 84.21 79.98 84.75 80.23 80.94 75.44 ShaPri 73.88 66.35 84.50 80.15 84.73 80.29 81.03 75.59 DoEmb 74.10 66.39 84.10 79.79 84.93 80.46 81.04 75.55 ADE 74.61 66.81 84.77 80.62 85.06 80.60 81.48 76.01 Our 75.24 67.36 85.38 81.21 85.87 81.54 82.16 76.71 Our FB 76.73 69.38 86.06 81.63 86.56 82.49 83.12 77.83 Table 3: Final results on test data where FE denotes model with fine-tuned ELMo, FB denotes model with fine-tuned BERT, and * denotes model ensam-ble.",
"formance than the ShaPri model, which further verifies the effectiveness of feature matching network.",
"Overall, unlabeled data is extremely helpful to enhance the feature representations that contribute for semi-supervised dependency parsing via fine-tuning BERT or enhancing the power of feature matching network.",
"Table 3 shows the final results on test data and makes a comparison with previous works.",
"First, we can see that the ShaPri model achieves better performance than FulPri and FulSha models, demonstrating that both domain-invariant and domain-specific features are helpful for semi-supervised dependency parsing.",
"More specially, the FulSha model outperforms the FulPri one on PB and ZX domains but slightly declines on PC domain, possibly because the huge divergence between source and target domains leads to the interference for shared features learning.",
"Although the ShaPri model already achieves better parsing accuracy, our model still outperforms it by 1.5% improvement in averaged LAS, indicating that the dynamic matching network is useful for enhancing the capability of target feature representational space via learning information from source domain.",
"Second, the utilization of BERT boosts all model performances by a large margin.",
"Fine-tuning BERT with unlabeled data can further enhance the model performance.",
"Even the baseline models with BERT become much stronger, our proposed model still achieves the best 77 78 ZXdata 76 77 LAS ( % ) PBdata None 1to3 2to3 3to3 1to2 2to2 3to2 1to1 2to1 3to1 62 63 MatchingPair PCdata Figure 8: Accuracy curves regarding matching pairs.",
"performance, which further demonstrates the effectiveness of our proposed model.",
"Finally, we present the remarkable results of previous works in the top block.",
"Yu et al. (2019) combine self-training and model ensemble approaches to improve the model performance.",
"Peng et al. (2019) re-implement the DoEmb model with fine-tuned ELMo using the codes released by Li et al. (2019b).",
"The top system submitted by Li et al. (2019c) joints the advantages of tri-training, model ensemble, and BERT for the model training.",
"(Li et al., 2020b) propose the ADE model and utilize fine-tuned BERT for semi-supervised dependency parsing, achieving competitive performances with the top system.",
"Our proposed single model outperforms all these baseline models, leading to new state-of-the-art results on all domains.",
"Because there still lacks related studies of feature transfer on semi-supervised dependency parsing, we for the first time design detailed comparative experiments to gain more insight on the impact of migrating the intermediate features from source to target domain.",
"Here, One-to-one means only selecting a matching pair with the matching weight 1 ; All-to-all means that all matching pairs are used with matching weights 1 ; Learned matching means that all matching pairs are used with generated matching weights by our matching network.",
"Figure 8 shows results of different One-to-one models where 1to3 means learning information from the 1 th -layer outputs of source-domain private BiLSTM to the 3 th -layer outputs of target-domain private BiLSTM.",
"First, we can see that almost all 1041 PC PB ZX AVG UAS LAS UAS LAS UAS LAS UAS LAS FulPri 68.44 59.96 79.35 74.60 74.43 70.22 74.07 68.26 FulSha 70.19 62.11 81.42 76.62 81.84 76.87 77.82 71.81 ShaPri 69.87 61.87 81.33 76.71 82.20 77.15 77.80 71.91 DoEmb 70.26 62.00 81.19 76.75 82.37 77.52 77.91 72.09 ADE 70.80 62.69 81.57 76.92 82.57 77.84 78.31 72.48 Our 71.36 63.67 82.00 77.53 82.45 78.16 78.61 73.12 Table 5: Results of multi-source domain adaptation on dev data.",
"One-to-one models outperform the None one, demonstrating that the model can learn some useful information from source domain to target domain via a simple feature matching process.",
"Second, the model achieves a slight improvement when we transform features from the source domain to the higher layer outputs of target private BiLSTM.",
"The reason may be that the higher layer outputs of BiLSTM contain much syntax-related information which has a higher domain relevance.",
"Finally, we find that different domains have different trends in the curves, so it is difficult to select an explicit matching setting that adapts all domains.",
"Table 4 presents that the best One-to-one model achieves better performances than All-to-all.",
"We suspect the reason may be that All-to-all model treats all matching pairs equally, thus may lead to potential conflicts between different matching pairs.",
"Additionally, Learned matching boosts the All-to-all performance by a large margin, indicating that our model is extremely useful for learning matching weights and alleviating the conflicts of feature transfer.",
"Overall, the results can clearly demonstrate that modeling appropriate matching weights to emphasize useful information and filter out harmful knowledge is crucial to improve the capability of domain adaptation.",
"Table 5 presents the parsing accuracy on dev data where each model is trained with multi-source domain training data.",
"For example, if the target domain is PC, its training data comes from BC, PB, and ZX domains.",
"On the one hand, we observe that the same model trained with multi-source domains slightly outperforms it trained with only one source domain.",
"The reason may be that although multi-source domains can provide more knowledge for the target domain, the data distribution shift leads to the negative transfer.",
"Therefore, using all source domains simultaneously always requires more sophisticated hand-crafted configurations of the feature transfer.",
"On the other hand, we can see that our proposed model achieves the best performance over various baselines.",
"It demonstrates that the learned matching weights are helpful for constructing the relationships between target and multiply source domains, thus further boosting the parsing accuracy of the target domain.",
"Domain adaptation generally falls into two categories: semi-supervised where large-scale labeled data for the source and small-scale labeled data for the target are available and unsupervised where only the labeled data for the source domain is given.",
"Due to the lack of target-domain labeled data, previous works focus on unsupervised domain adaptation.",
"One stream of work attempts to create pseudo training samples for the target domain via self-training, co-training, or tri-training processes (Yarowsky, 1995; Blum and Mitchell, 1998; Clark et al., 2003; Sgaard and Rishj, 2010; Yu et al., 2015; Li et al., 2019c; Saito et al., 2020).",
"As a coin has two sides, self-training has been proven effective on cross-domain constituency parsing (Mc-Closky et al., 2006) and dependency parsing (Yu et al., 2015), but Charniak (1997) reports either mi-nor improvements or significant damage for parsing by self-training.",
"Clark et al. (2003) show the same findings on POS-tagging.",
"Both Sarkar (2001) and Steedman et al. (2003) demonstrate that co-training is helpful for cross-domain dependency parsing.",
"Li et al. (2019c) successfully use tri-training and fine-tuned BERT to improve the parsing accuracy.",
"However, these approaches often require both caution and experience for selecting the appropriate pseudo samples.",
"Another stream of work focuses on learning the feature representations from multiple source domain via mixture of experts.",
"Kim et al. (2017) combine the predictions of domain experts via attention.",
"Guo et al. (2018) propose a mixture of experts which uses a point to set metric.",
"(Wright and Augenstein, 2020) extend the mixture of experts method on large pre-trained transformer models, leading to significant improvements.",
"Motivated by these works, our work attempts to learn the relationship between different domain-specific representations.",
"5.2 Semi-supervised domain adaptation In the past few years, semi-supervised domain adaptation for dependency parsing has achieved great improvements with the development of parsing communities (Chen et al., 2013; Yu et al., 2013; Li et al., 2019b; Peng et al., 2019).",
"Feature separation , as a strand work of semi-supervised domain adaptation, is first proposed by Daum III (2007) and achieves good results on sequence labeling tasks.",
"Finkel and Manning (2009) extend this method by using a hierarchical Bayesian prior.",
"Kim et al. (2016) apply it on neural-based model which uses a shared and multiple private BiLSTMs to separate domain-invariant and domain-specific features.",
"Adversarial learning is a common method to encourage the shared encoder to extract more pure domain-invariant features via cheating the domain classifier (Ganin and Lempitsky, 2015; Bousmalis et al., 2016; Cao et al., 2018).",
"Most relatively, Sato et al. (2017) apply adversarial learning on shared-private model but find slight improvements for semi-supervised dependency parsing.",
"Li et al. (2020b) also exploit adversarial learning on the shared-private and domain embedding models with two strategies and achieves better performances than no-adversarial ones.",
"Another strand work is feature transformation .",
"Ando and Zhang (2005) design a variety of auxiliary problems to learn various aspects of the target problem from unlabeled data.",
"Chen et al. (2013) propose the traditional feature transformation for dependency parsing which is similar as a way of doing feature smoothing.",
"Jang et al. (2019) utilize the meta-learning to learn transfer weights of heterogeneous networks and tasks, leading to great improvements.",
"Hu et al. (2021) propose a multi-view framework which combines multiple source models into an aggregated source view at language, sentence, or sub-structure levels.",
"However, there still lacks related researches on the neural-based model for cross-domain dependency parsing.",
"This work proposes a feature matching shared-private model for semi-supervised dependency parsing.",
"Meanwhile, we utilize unlabeled data to enhance the power of feature matching network and the BERT representations.",
"Our proposed approach achieves consistent improvements among various baseline models, leading to new state-of-the-art results on all domains.",
"The detailed analysis shows that compared with manual matching setting, the automatically learned matching weights by our designed dynamic matching network can improve the parsing accuracy.",
"Furthermore, our proposed model can be directly extended to multi-source domain adaptation and achieves the best performance among various baselines, further demonstrating the effectiveness and robustness of our proposed method.",
"We thank our anonymous reviewers for their helpful comments.",
"We are very grateful to Zhenghua Li for his careful guidance of our work.",
"We also thank Chen Gong, Qingrong Xia, Yu Zhang, Houquan Zhou, Yahui Liu, and Tong Zhu for their help in paper writing and polishing.",
"This work was supported by National Natural Science Foundation of China (Grant No. 62036004) and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other"
] |
[
"This paper focuses on Seq2Seq (S2S) constrained text generation where the text generator is constrained to mention specific words, which are inputs to the encoder, in the generated outputs.",
"Pre-trained S2S models such as T5 or a Copy Mechanism can be trained to copy the surface tokens from encoders to decoders, but they cannot guarantee constraint satisfaction.",
"Constrained decoding algorithms always produce hypotheses satisfying all constraints.",
"However, they are computationally expensive and can lower the generated text quality.",
"In this paper, we propose Mention Flags ( MF ), which trace whether lexical constraints are satisfied in the generated outputs of an S2S decoder.",
"The MF models are trained to generate tokens until all constraints are satisfied, guaranteeing high constraint satisfaction.",
"Our experiments on the Common Sense Generation task ( CommonGen ) (Lin et al., 2020), End2end Data-to-Text task ( E2ENLG ) (Dusek et al., 2020) and Novel Object Captioning task ( nocaps ) (Agrawal et al., 2019) show that the MF models maintain higher constraint satisfaction and text quality than the baseline models and other constrained text generation algorithms, achieving state-of-the-art performance on all three tasks.",
"These results are achieved with a much lower run-time than constrained decoding algorithms.",
"We also show that the MF models work well in the low-resource setting.",
"1 1 Introduction This paper focuses on Seq2Seq (S2S) constrained text generation where a set of encoder input tokens are required to be present in the generated outputs.",
"For example, Keyword-to-Text (Lin et al., 2020), Data-to-Text (Gardent et al., 2017; Du sek et al., 2020) and Image-to-Text (Lin et al., 2014; 1 The source code for this paper is released at https: //github.com/GaryYufei/ACL2021MF Figure 1: An overview of the Mention Flag mechanism for Transformer-based S2S models.",
"Agrawal et al., 2019) require the models to mention all or some of the input keywords, key-value pairs and image object labels (respectively), potentially with linguistic variants, in the generated outputs.",
"Large (pre-trained) Transformer-based S2S models such as T5 (Raffel et al., 2019) can be trained (fine-tuned) to perform this task.",
"However, they only learn to copy the surface tokens from encoder inputs to the decoder outputs and there is no underlying mechanism guaranteeing good constraint satisfaction (the ratio of satisfied lexical constraints to given lexical constraints).",
"Constrained Beam Search (CBS) (Anderson et al., 2017) and related algorithms can guarantee outputs satisfying all constraints, however they are much slower than the standard beam search algorithm.",
"In addition, as they are all inference-based algorithms, their corresponding models are not aware of the constraint words or phrases, the resulting generation could be poor.",
"Ideally, a method for producing constrained text should:",
"a) generate high-quality text;",
"b) achieve high constraint satisfaction;",
"c) have an efficient inference procedure.",
"To this end, we propose Mention Flags ( MF ), which trace whether a lexical constraint has been realized in partial decoder outputs.",
"Specifically, each decoder input token is provided with a set of flags indicating which constraints have been satisfied up to that token.",
"As shown in Fig 1, the Mention Flags for flower is set from the third step, because flower is generated at the second step.",
"We represent the three possible Mention Flags as separate trainable embeddings and inject them into the decoder of the S2S Transformer-based Text generator.",
"The dynamic Mention Flags explicitly inform the model about which constraints have been satisfied, which is helpful for the models to produce high-quality text satisfying the constraints (Goal a ).",
"During training, all the mention flags are set when the model is tasked to generate the End-of-Sequence (EOS) token, strongly encouraging the model not to stop generation until all constraints are satisfied (Goal b ).",
"The MF models only require ordinary decoding algorithms.",
"Their inference time and memory requirements are similar to their baseline models (Goal c ).",
"We conduct experiments on three benchmarks: Commonsense Generative Reasoning ( CommonGen ) (Lin et al., 2020), where the only input is a set of words representing concepts, and the output text is constrained to include all of them; End-to-End Data-to-Text ( E2ENLG ) (Dusek et al., 2020), where the constraints are meaning representations with lexicalised attributes and values that the output text should mention; and Novel Object Captioning at scale ( nocaps ) (Agrawal et al., 2019), where constraints are salient image objects that should be mentioned in the generated caption.",
"Compared to the constrained decoding algorithms, the MF models can produce higher-quality text with a similar level of constraint satisfaction and much less inference run-time and memory.",
"Mention Flags are a general mechanism that improves constraint satisfaction in the non-pre-trained and pre-trained S2S Transformer-based models.",
"Furthermore, our experiments show that the MF models can satisfy novel constraints (i.e, involving words or phrases not seen during training) and they work well in low-resource settings.",
"Our MF models set a new state-of-the-art in these three tasks.",
"In this paper, we focus on constraining transformer-based text generation models due to their popularity and success in various domains, especially in large-scale pre-trained language models (Raffel et al., 2019; Lewis et al., 2020).",
"Previous work can be roughly categorized into two streams: S2S training approaches and Constrained decoding approaches: Training S2S Models S2S models can implicitly capture the co-occurrence between encoder and decoder sequences, particularly pre-trained ones such as T5 (Raffel et al., 2019) and BART (Lewis et al., 2020).",
"Wen et al. (2015) uses a special gate to control what information will be generated in the following steps.",
"Kale and Rastogi (2020) have shown that the T5 models achieve state-of-the-art results in various Data-to-Text tasks, requiring copying from encoder to decoder, after fine-tuning.",
"As an alternative, the Copy Mechanism (Gu et al., 2016) explicitly learns where to copy the input constraints into the output by adding an extra copy pathway to the models.",
"However, these approaches cannot control or guarantee their constraint satisfaction.",
"Lin et al. (2020) also have observed lower constraint satisfaction in the above methods, compared to the constrained decoding approaches.",
"Constrained Decoding These algorithms, including Constrained Beam Search (CBS) (An-derson et al., 2017) and Grid Beam Search (GBS) (Hokamp and Liu, 2017), maintain a set of states which have their own sizek beams and only allow hypotheses satisfying specific constraints to be considered during inference.",
"Each CBS state corresponds to the hypotheses satisfying different constraints (exponential in the number of constraints) and the GBS states correspond to the hypotheses satisfying the same number of constraints (linear to constraint number).",
"Balakrishnan et al. (2019); Juraska et al. (2018); Dusek and Jurccek (2016) also modify their inference algorithm in a similar way to fulfill specific output requirements.",
"However, they significantly increase the inference run-time and memory and can produce sub-optimal outputs.",
"This section first formulates constrained text generation tasks, then introduces Mention Flags and their",
"In the S2S constrained text generation tasks, we are given encoder inputs x = [ x 1 , . . . , x l x ] X that describe the task, where some x i correspond to lexical constraints that must be satisfied in the generated outputs.",
"At generation step t , the decoder takes as input the tokens generated so far y : t = [ y 1 , , y t ] Y and generates the next output token y t +1 .",
"At generation step t , a set of Mention Flags indicates whether each lexical constraint has been satisfied up to this step (i.e., in the decoder input sequence y : t ).",
"Formally, they can be defined as m : X Y { 0 , 1 , 2 } l x where | m( x , y : t ) | = | x | .",
"Specifically, Mention Flag m( x , y : t ) i is for the input token x i in x : m( x , y : t ) i = 0 x i is not a constraint 1 x i is not mentioned in y : t 2 x i is mentioned in y : t (1) The values 1 and 2 represent the status of constraint satisfaction.",
"Once y : t satisfies the constraints, the value of the corresponding Mention Flag(s) are updated from 1 to 2.",
"Value 0 is a static default value for all tokens x i that do not correspond to any constraints.",
"They are not required to be mentioned in the outputs.",
"These typically act as instructions to the model.",
"At the start, Mention Flags m( x , ) { 0 , 1 } l x where is the empty string because the empty string does not mention anything.",
"During generation, m is monotonic in y : given decoder input sequence y : t and y :( t +1) , m( x , y : t ) i m( x , y :( t +1) ) i .",
"The Mention Flags for any token x i can only remain unchanged or update from value 1 to 2.",
"Example In Figure 2, given encoder input tokens x = [ name, Tetas, area, South, Bank ] , we start from m( x , ) = [0 , 1 , 0 , 1 , 1] because name and area are not lexical constraints.",
"At step 4, m( x , [ Tetas, is, located ]) = [0 , 2 , 0 , 1 , 1] because Tetas has already been mentioned in the current decoder input sequence [ Tetas, is, located ] .",
"Value Update for Multi-Word Constraints As shown in Figure 2, Mention Flags for the tokens corresponding to the same constraint are updated together.",
"Given encoder input tokens x i , , x j , forming a multi-word constraint, we require that x y : t < S > Tetas is located in the South Bank .",
"m( x , y ) i = = m( x , y ) j for all (partial) outputs y , and m( x , y : t ) i = = m( x , y : t ) j = 2 iff x i , , x j are mentioned in y : t .",
"We use conventions from the relevant data set to determine whether a constraint is a multi-word constraint.",
"This avoids false update when the models only generate the prefix of the constraints, rather than the full constraints.",
"For example, given constraint washing machine, the output could be I put my washing in the new washing machine.",
"The situation becomes more complicated when both washing and washing machine are given lexical constraints.",
"When we find this case, we delay the value 2 update for washing until the word in is generated.",
"Modern tokenization methods, such as BPE (Sennrich et al., 2016), make this situation frequent.",
"Definition of Mentions We deliberately allow a flexible notion of mentions in the Function m() .",
"We can define various types of mentions to fulfill the requirements of different applications and tasks.",
"With this flexibility, the end-users can use Mention Flags in many constraint scenarios.",
"For tasks with strict constraints, we define mentions to be the exact string match in y : t .",
"Otherwise, inflectional variants or synonyms of words in the lexical constraints are allowed when checking for mentions .",
"Our Mention Flag mechanism thus supports lexical constraints with multiple verbalizations.",
"We leave more sophisticated constraints (e.g., using NLP parsers) to future work.",
"During training, given x and ground-truth output Y gt (with l gt tokens), we can construct the ground-truth Mention Flag Matrix F gt { 0 , 1 , 2 } l x l gt by finding the mentioning position of tokens in the lexical constraints in Y gt .",
"F gt follows the same masking strategy as the decoder input tokens y : t .",
"For the tokens whose corresponding lexical constraints having no alignment with Y gt , their Mention Flags are also assigned value 0. During inference, we build the Mention Flag matrix incrementally, starting from F inf , 0 = [m( x , )] { 0 , 1 } l x 1 .",
"In step t , we add a new column m( x , y : t ) to F inf ,t 1 { 0 , 1 , 2 } l x ( t 1) and obtain the new Mention Flag matrix F inf ,t { 0 , 1 , 2 } l x t .",
"Why Mention Flags work During the training of MF models, the ground-truth always has all MFs set to completed before stopping the generation (i.e., before generating EOS Token).",
"This provides a strong signal to satisfy all constraints before completing generation.",
"The value update from 1 to 2 in MF provides implicit signals about where the constraints are satisfied during training.",
"Otherwise, the model has to learn this information via the co-occurring sub-sequences between input sequence and output sequence.",
"These two signals allow the model to achieve high constraint satisfaction and help to maintain high text quality (Sec. 4.5).",
"Since there are only 3 added embeddings, learning does not require a substantial amount of training data (Sec. 4.7).",
"Since these embeddings are indepen-dent of particular lexical constraints, we expect that performance on novel constraints, not seen during training, is improved (Sec. 4.5).",
"As shown in Figure 3, Mention Flags are injected into the Transformer decoder.",
"We first review the standard S2S Transformer proposed in Vaswani et al. (2017), then discuss how to inject Mention Flags information into the S2S Transformer model.",
"Standard S2S Transformer Model The encoder input tokens x is fed into the Transformer Encoder h e = Enc ( x ) where h e R l x d and d is the model hidden size.",
"In the Transformer decoder, there are two self-attention modules, Self MultiHead Attention ( SA ) which handles the current decoder input sequence y : t , and Cross Multi-Head Figure 3: In each decoder layer, the Cross-Attention (CA) module (light blue) integrates Mention Flags as additional inputs describing relationship between encoder contents and decoder input tokens.",
"where h dt = SA ( y : t ) .",
"KV is the standard key-value self-attention proposed in Vaswani et al. (2017).",
"The outputs of CA ( h dt , h e ) further determine the model output y t +1 via a Feed Forward layer, a Residual Connection and a softmax layer.",
"Incorporating Mention Flag Matrix Our two-dimensional Mention Flag matrix F { 0 , 1 , 2 } l x t is associated with the elements from encoder output h e and current decoder input y : t .",
"The optimal way is to incorporate the full F matrix into a component in the Transformer decoder.",
"We note that the CA module in the Transformer decoder already uses y : t as query and h e as key.",
"The resulting query-key similarity matrix has the same size of our Mention Flag matrix, making it suitable to incorporate F .",
"Inspired by Shaw et al. (2018) which incorporates token relative positions into the SA module, we propose to inject Mention Flags as the relative positions between encoder output h e and current decoder input y : t in the CA module.",
"In each decoder layer, we represent F as two sets of trainable embeddings Mention Flag key m k = E k ( F ) and Mention Flag Value m v = E v ( F ) where E k , E v R 3 d are the Mention Flag embedding tables.",
"m k and m v R l x t d .",
"We have separated Mention Flags representations for each decoder layer.",
"Eq.",
"4 is changed to: CA ( h dt , h e , m k , m v ) = R ( W cq h dt , W ck h e , W cv h e , m k , m v ) (5) where R is the Self-Attention function with relative position, defined as follows: R ( q , k , v , m k , m v ) j = l x (cid:88) i =1 a i,j ( v i + m vi,j ) (6) a ,j = Softmax ( e ,j ) (7) e i,j = q j ( k i + m ki,j ) T d (8) As an alternative to representing F as m k and m v , we could follow the approach to relative position in the T5 model (Raffel et al., 2019) and represent F as scalars that are added to the corresponding logits e i,j in Eq.",
"7 used for computing the attention weights.",
"However, we find this scalar approach less effective than our proposed one in Sec. 4.6.",
"We conduct experiments on three benchmarks with different forms of constraints including Commonsense Generative Reasoning ( CommonGen ) (Lin et al., 2020) with keyword constraints, End-to-End restaurants dialog ( E2ENLG ) (Dusek et al., 2020) with key-value constraints, and Novel Object Captioning at scale ( nocaps ) (Agrawal et al., 2019) with visual object word constraints.",
"We integrate Mention Flags with a three-layer standard S2S Transformer models ( Trans, L3 ) (Vaswani et al., 2017) and pre-trained T5 models (Raffel et al., 2019) for each task.",
"The T5 models achieve state-of-the-art results in various Data-to-Text tasks (Kale and Rastogi, 2020).",
"For the T5-Base and T5-Large models, we use the implementation of T5 models in the huggingface transformers 2 .",
"The Trans, L3 models share the same implementation of the T5-Base models, except that it is not initialized with the pre-trained parameters and it only uses 3 layers, rather than 12 layers, for both encoder and decoder.",
"In addition, to improve the generalization of our pre-trained model, we freeze the parameters in the Self-Attention module and Feed-Forward Layers in each 2 https://github.com/huggingface/ transformers layer of the T5 decoder.",
"This parameters freezing technology is applied to both T5 baseline models and the MF models in all of our experiments.",
"We report constraint satisfaction for all tasks.",
"We use GBS in the CommonGen task (max 5 constraints) and CBS in the E2ENLG (max 1 constraint) and nocaps (max 2 constraints) task.",
"In this task, the encoder input is a sequence of concepts C = [ c 1 , , c k ] , k 5 .",
"The models should generate a coherent sentence describing all concepts in C .",
"m( C, ) = [1 , 1 , , 1] and m allows inflectional variants to satisfy lexical constraints.",
"We train (fine-tune) Trans, L3 , T5-Base and T5-Large model as our baselines.",
"We apply Mention Flags to the T5-Base and T5-Large model (+ MF ).",
"Following the suggestions in Lin et al. (2020), we report CIDEr (Vedantam et al., 2015) and SPICE (Anderson et al., 2016) as generated text quality metrics.",
"We calculate constraint satisfaction for all constraints (ALL), novel constraints (Novel) and seen constraints (Seen).",
"Results Table 1 shows that the MF model improves the constraint satisfaction over the baselines for all cases, achieving close to 100% (i.e., 99.6% and 99.1%).",
"Notably, Mention Flags improve novel constraint satisfaction from 2.3% to 49.2% in the randomly initialized Transformer models.",
"Compared to the LevenTrans (Gu et al., 2019) and ConstLeven (Susanto et al., 2020) models, our Trans, L3 + MF model achieves higher CIDEr and SPICE scores with constraint satisfaction 4.1% lower than the non-autoregressive ConstLeven model.",
"While GBS provides a way to maximise constraint satisfaction (i.e., 100%), doing so significantly degrades the output text quality (more than 50 CIDEr).",
"Our MF model achieves near optimum constraint satisfaction while improving text quality (5.7 CIDEr score improvement in T5-Base and 6.5 CIDEr score improvement in T5-Large ).",
"Finally, our T5-Large + MF model outperforms the previous state-of-the-art result (Liu et al., 2021), which integrates the ConceptNet (Speer et al., 2017) into the BART model, by 6.5 CIDEr and 0.7 SPICE, suggesting that pre-trained language models with textual concepts may provide sufficient information for this task.",
"In this task, the encoder input is a sequence of key-value meaning representations C = [ k 1 , v 1 , , k n , v n ] , n 8 .",
"We lists all given key-value information as a space-separated string.",
"m( C, ) = [0 , 1 , 0 , 1 , , 0 , 1] and m allows synonyms to satisfy lexical constraints.",
"For example, welcome children and is family friendly are both mentions of familyFriendly[yes] .",
"The models must generate a fluent and coherent dialog response using all key-value pairs in the encoder.",
"E2ENLG includes 79 different in-domain key-value constraints.",
"We use the scripts from Dusek et al. (2019) 3 to construct the synonyms set for these inputs.",
"We use Trans, L3 and T5-Base model as our baselines.",
"We use CBS to constrain the T5 model to satisfy all missing constraints (T5-Base + C).",
"We report NIST (Lin and Hovy, 2003), BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) as they are common metrics for evaluating the quality of long text in the E2ENLG outputs (more than 20 tokens).",
"Results Table 2 shows that the MF models consistently achieve higher output text quality and constraint satisfaction than the baseline models (99.9% vs. 95.1% and 100% vs. 96.6%).",
"CBS improves the T5 model's constraint satisfaction, but negatively affects the text quality (0.3 BLUE points lower).",
"Shen et al. (2019), the previous state-of-the-art, trained the model via a complex speaker-listener approach inspired by cognitive science.",
"With a much simpler model architecture (S2S), our T5 + MF model achieves full constraint satisfaction and outperforms Shen et al. (2019) by 0.2 NIST and 0.3 METEOR.",
"Using T5 for Image Captioning In Image Captioning, each input image is represented by a sequence of visual objects.",
"Each of these objects is assigned (by the object detector) with a textual label.",
"The encoder input is a sequence of objects followed by the same textual labels C = [ v 11 , , v s 1 1 , l 1 , , v 1 k , , v s k k , l k ] where v i is the visual feature vector (similar to the one in Li et al. (2020)) and l i is the corresponding textual label.",
"The visual features are used in the same way of normal textual tokens in the T5 models.",
"We find this approach works well for both nocaps and standard COCO image captioning task.",
"Experiment Setup Traditional image captioning models select and describe a subset of input objects jointly (Anderson et al., 2018).",
"However, Pudup-pully et al. (2019) shows the benefits of separating content selection and text planning steps for general data-to-text tasks.",
"Following this, we propose to first select salient objects and incorporate the selected objects into the description using Mention Flags.",
"m( C, ) = [0 , 0 , , 1 , , 0 , 0 , , 1] where only salient object labels receive value 1. m() allows inflectional variants to satisfy lexical constraints.",
"We use T5-base model in this experiment.",
"The T5 + C and T5 + MF + C models are constrained with CBS.",
"Following Wang et al. (2021), we report CIDEr and SPICE as output text quality metrics and constraint satisfaction for novel constraints (Novel) and all constraints (ALL).",
"We present the performance for all evaluation images ( Overall ) and for the challenging images with only novel objects ( out-of-domain split).",
"Salient Object Selector We use a transformer-based salient object detector to select a subset of object labels as lexical constraints.",
"The visual representations of detected image objects are first fed into the 3-layer standard Transformer model without any positional embedding.",
"We train this detector using binary Cross-Entropy loss averaged over all detected input objects.",
"The training data for salient object detection is the training data in nocaps .",
"We use COCO 2017 Dev set as the evaluation dataset to select the best checkpoint.",
"Results Mention Flags achieve optimal constraint satisfaction in almost all cases.",
"In particular the Trans, L3 + MF model shows marked improvement (i.e., from 16.3% to 49.3%) on novel constraints, despite the fact that the corresponding token embeddings are not changed from their random initialisation.",
"The generated text quality is also improved, particularly in the out-of-domain split.",
"The T5 + C model is 0.3 SPICE lower in both overall and the out-of-domain split than the T5 + MF model, indicating that the MF model correctly captures more long-range relationships (calculated by the parsing trees used in SPICE) among the (novel) objects than CBS.",
"Our T5 + MF model outperforms the existing state-of-the-art end-to-end single-stage image captioning systems (Agrawal et al., 2019; Li et al., 2020; Wang et al., 2021) by 1.3 CIDEr and 0.1 SPICE on the validation set and 1.7 CIDEr and 0.2 SPICE on the test set, showing the advantage of our two-stage captioning model empowered by Mention Flags.",
"VIVO + C (Hu et al., 2020) is not comparable as it uses additional visual-text aligned training data.",
"Finally, we investigate the relatively lower constraint satisfaction in nocaps (98.3% vs. 99.5+%) compared to the MF models in the other two tasks and find that missing cases frequently happen in the instances with two constraints involving",
"a) (near-) synonymy (e.g., mule and horse) and",
"b) hyponymy (e.g., hot dog and fast food).",
"A more advanced salient object detector would solve this issue.",
"The MF models use standard beam search and run much faster with less memory than the constrained beam search algorithms.",
"For comparison, we select the GBS algorithm because its resource use is linear in the number of constraints and uses less run time and memory than CBS.",
"We run the MF models and the models with GBS using beam size 5 and compare their run time (RT) and memory requirement (#M) in Table 4.",
"Compared to the MF models, GBS runs one to two orders of magnitude slower, and uses 4.4 to 23.4 times more memory.",
"Compared to the T5-Base model, the MF models only increases the inference time slightly.",
"Constraint Satisfaction & Text Quality In all tasks, MF models improve the text quality over their baselines (including CBS and GBS) while achieving constraint satisfaction that is close to 100%.",
"Non-Pre-trained vs. Pre-trained Models In all tasks, Mention Flags have a similar effect (higher text quality and constraint satisfaction) on both non-pre-trained and pre-trained models.",
"This indicates that Mention Flags do not rely on information from pre-trained models to be effective.",
"Novel Constraints In the CommonGen and nocaps tasks, the Trans, L3 + MF model achieve much higher coverage (i.e., 2.3% to 49.2% in CommonGen ; 16.3% to 49.3% in nocaps ) for constraints with novel lexical items than the baseline models.",
"Here, the MF models can satisfy novel constraints, even where the corresponding token representations did not receive any training signals.",
"As Mention Flags decouples with model representations, the MF models learn lexicon-independent indicators to mention the novel words.",
"4.6 Design Choices for Mention Flags We conduct experiments for following choices of Mention Flag: Static MF where value 2 ( is mentioned ) and 1 ( not mentioned ) are merged; Merged MF where value 0 ( not a constraint ) is merged with value 1; Scalar MF where Mention Flags are represented as scalars added to the attention logits in the CA module; and Shared MF where all decoder layers use the same Mention Flag embeddings.",
"We apply Static MF , Scalar MF and Shared MF to all three tasks.",
"We only use Merged MF in E2ENLG because a CommonGen model does not include value 0 and a nocaps model without value 0 cannot distinguish between constrained and non-constrained objects.",
"As shown in Table 5, in the CommonGen and nocaps tasks, the Static MF models achieve much lower constraint satisfaction, 99.6% vs. 94.5% and 98.3% vs. 87.2% respectively.",
"The explicit update from value 1 to 2 is important for high constraint satisfaction.",
"The merged MF model produces lower constraint satisfaction (100% to 98.9%) and generated text quality (68.3 BLEU to 67.7 BLEU) in E2ENLG , indicating the utility of value 0 in this task.",
"Compared to the MF models, Scalar MF models produce lower constraint satisfaction in the CommonGen and nocaps task (99.6% to 97.1%, 98.3% to 91.5%, respectively) and lower-quality generated text in all three tasks (1.2 BLEU, 3.2 CIDEr and 0.6 CIDEr lower).",
"Representing Mention Flags as Key and Value dense E2ENLG BLEU NIST METEOR Con.",
"vectors works better than scalars.",
"Finally, using shared MF across all decoder layers has negative impact (e.g., all constraint satisfaction ratio drop) in all three tasks.",
"This section shows that Mention Flags are still useful for improving the constraint satisfaction and generated text quality when trained with many fewer instances.",
"We use 0.1%, 1% and 10% of the original training instances to train the models.",
"In the first two tasks ( E2ENLG and CommonGen ), we compare the MF models with T5-Base models.",
"In the nocaps task, we additionally compare the T5-Base + MF model with the T5-Base + C model.",
"We report BLEU in E2ENLG CIDEr in CommonGen and nocaps .",
"As shown in Table 6, the MF models consistently generate higher-quality text (higher METEOR or CIDEr Score) and achieve higher constraint satisfaction than the baseline models.",
"The MF models reach 97+% when only training with 10% of the E2ENLG and CommonGen training data.",
"This confirms our claim in Sec. 3.2 that the three added Mention Flag embeddings can be learned with relatively little training data.",
"We chose three representative examples that illustrate successful use of Mention Flags (Table 7).",
"i) The MF model generates the most concise dialogue response, compared to the baseline and constrained decoding model;",
"ii) The MF model is the only model that generates a fluent and coherent sentence satisfying all input constraints;",
"iii) The MF model is the only model that accurately describes the relationship between bee and flower , grounding to the input images and constraints.",
"Human Evaluation We have shown that our proposed MF model can achieve higher constraint satisfaction ratio and automatic metrics.",
"However, the automatic metrics do not necessarily reflect human preference of the generated text.",
"We therefore select 100 output samples from the T5 baseline and our MF model in all three tasks (300 in total).",
"For each sample pair, we ask three annotators to judge which sample is more human-like.",
"Table 8 shows that more than 70% of output of our MF model is generally better or similar than the output of the baseline model, verifying the output quality of our MF model.",
"In this paper, we propose Mention Flags to constrain Transformer-based text generators via injecting mention status embeddings into text decoders.",
"Our extensive experiments on three different tasks have shown the effectiveness of Mention Flags in maintaining high generated text quality and excellent constraint satisfaction, comparing favourably to competitive constrained decoding algorithms.",
"We plan to expand Mention Flags",
"i) to control larger input source text such as constrained text summarization and machine translation;",
"ii) to handle larger granularity such as sentence-level.",
"We thank anonymous reviewers for their insightful suggestions to improve this paper.",
"This research was supported by a Google award through the Natural Language Understanding Focused Program, by a MQ Research Excellence Scholarship and a CSIRO's DATA61 Top-up Scholarship, and under the Australian Research Councils Discovery Projects funding scheme (project number DP160102156)."
] | [
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Previous work indicates that discourse information benefits summarization.",
"In this paper, we explore whether this synergy between discourse and summarization is bidirectional, by inferring document-level discourse trees from pre-trained neural summarizers.",
"In particular, we generate unlabeled RST-style discourse trees from the self-attention matrices of the transformer model.",
"Experiments across models and datasets reveal that the summarizer learns both, dependencyand constituency-style discourse information, which is typically encoded in a single head, covering longand short-distance discourse dependencies.",
"Overall, the experimental results suggest that the learned discourse information is general and transferable inter-domain 1 .",
"Extractive summarization is a common and important task within the area of Natural Language Processing (NLP) , which can be useful in a multitude of diverse real-life scenarios.",
"Current extractive summarizers typically use exclusively neural approaches, in which the importance of extracted units (i.e., sentences or clauses) and relationship between them are learned by the model from a large amount of data (e.g., Liu and Lapata (2019b)).",
"Inspired by previous work in pre-neural times, indicating that discourse information, especially discourse trees according to the Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), can benefit the summarization task (Marcu, 1999), several very recent neural summarizers have tried to explicitly encode discourse information to support summarization.",
"Overall, it seems that adding these encodings, consistent with pre-neural results, is beneficial.",
"In particular, injecting discourse has been shown to either improve performance 1 The code can be found in https://github.com/ Wendy-Xiao/summ_guided_disco_parser on the extractive summarization task itself (Xu et al., 2020), or allow for a substantial reduction in the number of the summarizer's parameters, while keeping competitive performance (Xiao et al., 2020).",
"The central hypothesis we are exploring in this paper is whether the synergy between discourse parsing and summarization is bidirectional.",
"In other words, we examine if summarization is a useful auxiliary task to infer discourse structures.",
"Liu et al. (2019b) performed a preliminary investigation of this conjecture, showing that structural information can be inferred from attention mechanisms while training a neural model on auxiliary tasks.",
"However, they did not perform any comparison against ground-truth discourse trees.",
"Further, recent work showed that discourse trees implicitly induced during training are oftentimes trivial and shallow, not representing valid discourse structures (Ferracane et al., 2019).",
"In this paper, we address these limitations by explicitly exploring the relationship between summarization and discourse parsing through the inference of document-level discourse trees from pretrained summarization models, comparing the results against ground-truth RST discourse trees.",
"Besides Liu et al. (2019b), our idea and approach are inspired by recent works on extracting syntactic trees from pre-trained language models (Wu et al., 2020) or machine translation approaches (Raganato and Tiedemann, 2018), as well as previous work on knowledge graph construction from pre-trained language models (Wang et al., 2020).",
"Specifically, we generate full RST-style discourse trees from self-attention matrices of a pre-trained transformer-based summarization model.",
"We use three different tree-aggregation approaches (CKY (Jurafsky and Martin, 2014), Eisner (Eisner, 1996) and CLE (Chu and Liu, 1965; Edmonds, 1967)), generating a set of constituency and dependency trees representing diverse discourse-related attributes.",
"Our proposal is thereby addressing one of the key limitations in discourse parsing, namely the lack of large training corpora.",
"We aim to overcome this limitation by generating a large number of reasonable quality discourse trees from a pre-trained summarization model, similar in spirit to what Huber and Carenini (2020) did with sentiment.",
"Admittedly, the discourse information captured with our approach is summarization task-specific, however, our generated discourse treebank can be combined with further task-dependent treebanks (e.g. from sentiment) to train more powerful discourse parsers in a multitask framework.",
"Generally speaking, the ability to infer discourse trees as a by-product\" of the summarization task can also be seen as a form of unsupervised discourse parsing, where instead of leveraging pretrained language models like in Kobayashi et al. (2019), we exploit a pre-trained neural summarizer. We empirically evaluate our method on three datasets with human RST-style annotations, covering different text genres. Multiple experiments show that the summarization model learns discourse information implicitly, and that more dependency information are captured, compared to structural (i.e., constituency) signals. Interestingly, an additional exploration of the attention matrices of individual heads suggests that, for all models, most of the discourse information is concentrated in a single head, and the best performing head is consistent across all datasets. We further find that the dependency information learned in the attention matrix covers long distance discourse dependencies. Overall, the results are consistent across datasets and models, indicating that the discourse information learned by the summarizer is general and transferable inter-domain. 2 Related Work Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is one of the most popular theories of discourse, postulating that a document can be represented as a constituency tree, where leaves are clause-like Elementary Discourse Units (EDUs), and internal nodes combine their respective children by aggregating them into a single, joint constituent. Each internal node also has a nuclearity attribute 2 , representing the local importance of their direct child-nodes in the par-ent context from the set of {Nucleus-Nucleus, 2 In this paper we do not consider rhetorical relations. Nucleus-Satellite, Satellite-Nucleus}. Nucleus\" child-nodes thereby generally play a more important role when compared to a Satellite\" child-node. Although standard RST discourse trees are encoded as constituency trees, they can be converted into dependency trees with near isomorphic transformations. In this work, we infer both, constituency and dependency trees. Over the past decades, RST discourse parsing has been mainly focusing on supervised models, typically trained and tested within the same domain using human annotated discourse treebanks, such as RST-DT (Carlson et al., 2002), Instruction-DT (Subba and Di Eugenio, 2009) or GUM (Zeldes, 2017). The intra-domain performance of these supervised models has consistently improved, with a mix of traditional models by Joty et al. (2015) and Wang et al. (2017), and neural models (Yu et al., 2018) reaching state-of-the-art (SOTA) results. Yet, these approaches do not generalize well inter-domain (Huber and Carenini, 2020), likely due to the limited amount of available training data. Huber and Carenini (2019) recently tackled this data-sparsity issue through automatically generated discourse structures from distant supervision, showing that sentiment information can be used to infer discourse trees. Improving on their initial results, Huber and Carenini (2020) published a large-scale, distantly supervised discourse corpus (MEGA-DT), showing that a parser trained on such treebank delivers SOTA performance on the more general inter-domain discourse parsing task. In this paper, we also tackle the data sparsity problem in discourse parsing, however, using a significantly different approach. First, instead of relying on sentiment, we leverage the task of extractive summarization. 
"Second, instead of a method for distant supervision, we propose an unsupervised approach.",
"The area of unsupervised RST-style discourse parsing has been mostly overlooked in the past, with recent neural approaches either taking advantage of pre-trained language models to predict discourse (Kobayashi et al., 2019) or using pre-trained syntactic parsers and linguistic knowledge (Nishida and Nakayama, 2020) to infer discourse trees in an unsupervised manner.",
"Similarly, our proposal only relies on a pre-trained neural summarization model to generate discourse trees.",
"Recent neural summarization models are typically based on transformers (Liu and Lapata, 2019a; Zhang et al., 2019).",
"One advantage of these models is that they learn the relationship between input units explicitly using dot-product self-attention, which allows for some degree of exploration of the inner workings of these complex and distributed models.",
"Here, we investigate if the attention matrices of a transformer-based summarizer effectively capture discourse information (i.e., how strongly EDUs are related) and can therefore be used to derive discourse trees for arbitrary documents.",
"Marcu (1999) pioneered the idea of directly applying RST-style discourse parsing to extractive summarization, and empirically showed that RST discourse information can benefit the summarization task, by simply extracting EDUs along the nucleus path.",
"This initial success was followed by further work on leveraging discourse parsing in summarization, including McDonald (2007), Hirao et al. (2013), and Kikuchi et al. (2014).",
"More recently, the benefits of discourse for summarization have also been confirmed for neural summarizers, e.g., in Xiao and Carenini (2019) and Cohan et al. (2018), using the structure of scientific papers (i.e., sections), and in Xu et al. (2020), successfully incorporating RST-style discourse and co-reference information in the BERTSUM summarizer (Liu and Lapata, 2019b).",
"In contrast to previous approaches demonstrating how discourse can enhance summarization performance, we have recently shown that discourse enables the specification of simpler neural summarizers, without affecting their performance (Xiao et al., 2020).",
"In particular, by using a fixed discourse-based attention, they achieve competitive results compared to learnable dot-product self-attention mechanisms, as used in the original transformer model.",
"Inspired by these findings, suggesting that transformer-based summarization models learn effective discourse representations, we explore if useful discourse structures can be inferred from learnt transformer self-attention weights.",
"Admittedly, Liu and Lapata (2018) and Liu et al. (2019b) presented preliminary work on inferring discourse structures from attention mechanisms, while training a neural model on auxiliary tasks, like text classification and summarization.",
"However, they did not perform any comparison against ground-truth discourse trees as we do here.",
"More importantly, we employ a more explicit approach to infer discourse structures, not as part of the learning process, but extracting the structures after the summarization model is completely trained and applied to new documents.",
"While our focus is on discourse, extracting syntactic constituency and dependency trees from transformer-based models has been recently attempted in both machine translation and language modelling.",
"In machine translation, Marecek and Rosa (2019) and Raganato and Tiedemann (2018) show that trained translation models can capture syntactic information within their attention heads, using the CKY and CLE algorithms, respectively.",
"In pre-trained language models, Wu et al. (2020) propose a parameter-free probing method to construct syntactic dependency trees based on a pre-trained BERT model, only briefly elaborating on possible applications to discourse.",
"In contrast to our work, they do not directly use attention heads, but instead build an impact matrix based on the distance between token representations.",
"Furthermore, while their BERT-based model cannot deal with long sequences, our two-level encoder can effectively deal with sequences of any length, which is critical in discourse.",
"3 Our Model. 3.1 Framework Overview: Our main goal is to show the ability of a previously trained summarization model to be directly applied to the task of RST-style discourse parsing.",
"Along this line, we explore the relationship between the information learned by the transformer-based summarizer and the task of discourse parsing.",
"We leverage the synergies between units learned in the transformer model by following Xiao et al. (2020), who previously proposed the use of a transformer document-encoder on top of a pre-trained BERT EDU encoder.",
"This standard summarization model is presented in Figure 1 (left).",
"In the transformer-based document encoder, each head internally contains a self-attention matrix, learned during the training of the summarization model, representing the relationship between EDUs (Figure 1 (center)).",
"In this paper, we analyze these learned self-attention matrices, not only to confirm our intuition that they contain relevant discourse information, but also to computationally exploit such information for discourse parsing.",
"We therefore generate a set of different (constituency/dependency) discourse trees from the self-attention matrices, focusing on different attributes of discourse, as shown in Figure 1 (right).",
"Our generated constituency trees only reveal the discourse tree structure, without additional nuclearity and relation attributes.",
"Figure 1: The pipeline of our whole method.",
"More interestingly, we complement the constituency interpretation of the self-attention by additionally inferring a dependency tree, also partially guided by discourse structures, but mostly driven by the RST nuclearity attribute, which has been shown to be more related to the summarization task, where the importance of the different text spans is critical (Hirao et al., 2013).",
"We present and discuss the different parsing algorithms to extract discourse information from the self-attention matrix next.",
"3.2 Parsing Algorithms: Formally, for an input document $D = \{u_1, \ldots, u_n\}$ with $n$ EDUs, each attention head returns an attention matrix $A \in \mathbb{R}^{n \times n}$, where entry $A_{ij}$ contains a score measuring how much the $i$-th EDU relies on the $j$-th EDU.",
"Given those bidirectional scores defining the relationship between every two EDUs in a document, we build a tree such that EDU pairs with higher reciprocal attention scores are more closely associated in the resulting tree.",
"In the constituency case, this means that EDUs with higher mutual attention should belong to sub-trees on lower levels of the tree, while in the dependency case this implies that the path between such EDUs should contain fewer intermediate nodes.",
"In essence, these requirements can be formalized as searching, within the set of possible trees, for the tree which maximizes a combined score.",
"3.2.1 Constituency Tree (C-Tree) Parsing: To generate a constituency tree from the attention matrix, we follow a large body of previous work in discourse parsing (e.g., Joty et al. (2015)), where constituency discourse trees are generated using the CKY algorithm (Jurafsky and Martin, 2014).",
"Specifically, we fill an $n \times n$ matrix $P \in \mathbb{R}^{n \times n}$, generating the optimal tree in bottom-up fashion with dynamic programming, according to: $P_{ij} = 0$ if $i > j$; $P_{ij} = \sum_{k=1}^{n} A_{ki}$ if $i = j$; and $P_{ij} = \max_{i \le k < j} \left( P_{ik} + P_{(k+1)j} + \mathrm{avg}(A_{i:k,\,(k+1):j}) + \mathrm{avg}(A_{(k+1):j,\,i:k}) \right) / 2$ if $i < j$.",
"Here, $P_{ij}$ with $i = j$ contains the overall importance of EDU $i$, computed as the attention paid by other units to unit $i$, while $P_{ij}$ with $i < j$ represents the score of the optimal sub-tree spanning from EDU $i$ to EDU $j$.",
"We select the best split point $k$ such that the sum of the left sub-tree spanning $[i:k]$ and the right one spanning $[(k+1):j]$, along with the average score of the connections between the two sub-trees, is maximized.",
"Figure 2: Example of CKY constituency parsing.",
"For example, to pick the structure of the sub-tree spanning EDUs $[3:5]$ (see Figure 2), we need to decide between the potential sub-tree aggregations $((3\,4)\,5)$ and $(3\,(4\,5))$.",
"The respective scores are computed based on the scores in the green and blue blocks in both the CKY and attention matrices.",
"Following this algorithm, two sub-trees with a high attention score between them tend to be combined on lower levels of the tree, indicating that they are more closely related in the discourse tree.",
"Besides the standard CKY algorithm described above, we also explore a hierarchical CKY approach with sentence and paragraph constraints.",
"Specifically, we do not aggregate $P_{ij}$ if the span $[i:j]$ crosses a sentence boundary where either sentence is incomplete.",
"In the previous example, if EDU 3 and EDU 4 were in the same sentence, we would choose the green sub-tree aggregation even if the score of the blue aggregation candidate were higher.",
"Plausibly, this hierarchical approach will perform better, since the ground-truth treebanks mostly contain sentences and paragraphs that are covered by complete discourse sub-trees.",
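The CKY recurrence above translates directly into a short dynamic program. The numpy sketch below (with a hypothetical function name, without the sentence/paragraph constraints and without reconstructing the tree from backpointers) illustrates the unconstrained variant.

```python
import numpy as np

def cky_from_attention(A):
    """Fill the CKY score table P from an EDU-level attention matrix A
    (n x n), following the recurrence above. Returns P and the backpointers
    needed to reconstruct the constituency tree."""
    n = A.shape[0]
    P = np.zeros((n, n))
    back = {}
    for i in range(n):
        P[i, i] = A[:, i].sum()              # attention paid by others to EDU i
    for span in range(1, n):                 # build larger spans bottom-up
        for i in range(n - span):
            j = i + span
            best, best_k = -np.inf, None
            for k in range(i, j):            # try every split point
                score = (P[i, k] + P[k + 1, j]
                         + A[i:k + 1, k + 1:j + 1].mean()   # left -> right links
                         + A[k + 1:j + 1, i:k + 1].mean()   # right -> left links
                         ) / 2
                if score > best:
                    best, best_k = score, k
            P[i, j], back[(i, j)] = best, best_k
    return P, back
```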
"sions contain the start and end indexes of sub-trees, similar to the CKY algorithm; while the third and fourth dimensions indicate whether the head is the start or the end unit, and whether the sub-tree is completed.",
"As done for constituency parsing, we also use a hierarchical version of Eisner's algorithm, in which we restrict inter-sentence connections for incomplete sentence trees.",
"Since the Eisner algorithm can only generate projective dependency trees, it will be inaccurate for documents with a non-projective discourse structure.",
"Chu-Liu-Edmonds (CLE) Algorithm: Originally proposed as a recursive approach to find the maximum spanning tree of a graph given its root, CLE can generate non-projective trees.",
"In the unconstrained case, we simply follow the standard CLE algorithm, selecting the EDU with the highest importance score, computed similar to Sec. 3.2.1, i.e. root = argmax i (cid:80) nk =1 ( A ki ) , as the root.",
"From there, the algorithm selects the optimal edges\", i.e. the maximum in-edges for each node except the root, breaking the cycles recursively.",
"Again, as we did for CKY and Eisner, we also apply the additional sentence constraint.",
"Unlike for the dynamic programming approaches, which build the trees in a bottom-up fashion and can directly be constrained to avoid cross-sentence aggregations of incomplete sentences, we need to substantially modify CLE to allow for sentence constraints.",
"(b)), in which e sSD = avg s S,d D e sd , and record the maximum edge corresponding to the edge between sentences, i.e. argmax s S,d D e sd .",
"After that, we use the CLE algorithm within the sentence containing the root EDU as the root sentence to find the maximum spanning tree in G s (Figure 3",
"(c)).",
"We then add the corresponding EDU edges to the final tree (Figure 3",
"(d)).",
"For example, the edge ( s 0 , s 1 ) in G s corresponds to the EDU edge ( e 0 , e 2 ) in G .",
"Next, we treat nodes with incoming edges from other sentences as the root of the sentence itself and run the CLE algorithm within each sentence (Figure 3",
"(e)).",
"The final tree (Figure 3",
"(f)) is eventually formed as the combination of inter-sentence edges derived in sentence graph G s and intra-sentence edges found within each sentence.",
"In order to show the generality of the discourse structures learned in the summarization model, we train our summarizer across a variety of datasets and hyper-parameter settings.",
"More specifically, we train on two separate, widely-used news corpora CNN Daily Mail (CNNDM) (Nallapati et al., 2016) and NYT (Sandhaus, 2008) , as well as under three hyper-parameter settings with different numbers of layers and attention heads:",
"(a) A simple model with 2 layers and a single head.",
"(b) 6 layers with 8 heads each, proposed in the original transformer model(Vaswani et al., 2017).",
"(c) 2 layers with 8 heads each, constituting a middle ground between the previous two settings.",
"By considering two corpora (CNNDM and NYT) and the three settings, we train six models, which we call: CNNDM-2-1, CNNDM-6-8, CNNDM-2-8, NYT-2-1, NYT-6-8, NYT-2-8 4 .",
"assessed on three discourse datasets (see Table 1).",
"RST-DT is the largest and most frequently used RST-style discourse treebank (Carlson et al., 2002), containing news articles from the Wall Street Journal.",
"Since this is the genre of both our summarization training corpora, the experiments testing on this dataset are intra-domain.",
"Instruction-DT contains documents in the home-repair instructions domain (Subba and Di Eugenio, 2009).",
"We categorize the experiments on 4 Complete evaluation results for all six models are presented in Appendix A. this dataset as cross-domain.",
"GUM contains documents from eight domains including news, interviews, academic papers and more (Zeldes, 2017).",
"Since the GUM corpus is multi-domain, the performance on this dataset will reveal the generalizability of generated trees in a broader sense.",
"All three discourse datasets contain ground-truth RST-style consituency trees.",
"While all corpora contain potential non-binary sub-trees, Instruction-DT also includes multi-root documents.",
"To account for these cases, we apply the right-branching bi-narization following Huber and Carenini (2019).",
"Furthermore, we convert constituency trees with nuclearity into ground truth dependency trees using the algorithm proposed in Li et al. (2014) .",
"To evaluate how well the generated trees align with ground-truth trees, we use RST Parseval Scores for constituency trees and Unlabeled Attachment Score for dependency trees, measuring the ratio of matched spans and the ratio of matched dependency relations, respectively.",
"For each model configuration, we run a set of experiments using the average attention matrix across all heads in a layer, i.e. A avg = (cid:80) h A h /H , with H as the number of heads.",
"This initial setup is intended to provide insights into the discourse information learned in each layer.",
"The results of the three tree-generation algorithms are shown in Table 2, 3 and 4 along with the performance of a random baseline obtained by running the algorithms on 10 random matrices.",
"Here, we present the results of three selected models, limited to the performance of the first two layers for the 6-layer models, to allow for a direct comparison to the 2-layer models 5 .",
"Across evaluations, the layer-wise performance within the same models are rather distinct, indicating that different properties are learned in the layers.",
"This finding is in line with previous work (Liu et al., 2019a), especially 5 Results for all six models can be found in Appendix B. Model No Cons.",
"given that the performance of each layer is consistent across constituency and dependency parsing outputs for all datasets.",
"Furthermore, the more layers the summarization model contains, the smaller the performance gap between layers becomes.",
"We believe that this could be caused by the discourse information being further spread across different layers.",
"Generally, we observe that models trained on the CNNDM dataset perform better than models trained on the NYT corpus, despite the larger size of the NYT dataset.",
"Plausibly, the superior performance of our models trained on CNNDM potentially reflects a higher diversity within documents in the CNNDM dataset.",
"Comparing the constituency tree performance in Table 2 against the dependency tree results in Tables 3 and 4, we can clearly see that the improvement of the constituency parsing approach over the random baseline is much smaller than the improvements for the generated dependency parse-trees.",
"Presumably, this larger improvement for the dependency trees is due to the fact that dependency relationships (strongly encoding the nuclearity attribute) are more directly related to the summarization task than the plain structure information.",
"This is in line with previous work on applying dependency trees to the summarization task (Hirao et al., 2013; Xu et al., 2020) and indicates that the learned attention matrices contain valid discourse information.",
"As for the two approaches to dependency parsing, although Eisner generally outperforms CLE, the improvement over random trees is larger for CLE.",
"We believe that this effect is due to the reduced constraints imposed on the CLE algorithm, which is not limited to generate projective trees.",
"Considering all three methods, the results of the CLE-generated dependency tree seem most promising.",
"A possible explanation is that both CKY and Eisner build the discourse tree in a bottom-up fashion with dynamic programming.",
"This way, only local information is used on lower levels of the tree.",
"On the other hand, the CLE algorithm uses global information, potentially more aligned with the summarization task, where all EDUs are considered to predict importance scores.",
"While all previous results rely on the average attention matrices, we now analyze whether discourse information is evenly distributed across attention heads, or if a subset of the heads contains the majority of discourse related information.",
"We describe this analysis only for CLE for two reasons:",
"(a) the summarization model seemingly captures more dependency-related discourse information than structure information;",
"(b) compared with Eisner, the CLE approach is more flexible, by also covering non-projective dependency trees.",
"Since the results across all summarization models are consistent, we only show the accuracy heatmap for the CNNDM-6-8 model on the three RST-style discourse datasets in Figure",
"4. Remarkably, for all three datasets, there is one head in the model capturing the vast majority of discourse information, especially in the unconstrained case.",
"Furthermore, the performance of the best single attention head is much better than the one of the average attention matrix shown in section 4.4 (e.g. 34 . 53 compared to 19 . 51 on the GUM dataset without sentence constraints).",
"These intriguing findings will be further explored in future work.",
"Localness of Trees: To further verify that the generated trees are non-trivial, for instance simply connecting adjacent EDUs, we analyze the quality of the trees produced with the second attention head on the second layer, which is the top performer among all the heads shown in Figure",
"4. First, we separate all dependency relationships into two classes: local , holding between two adjacent EDUs, and distant , including all other relations between non-adjacent EDUs.",
"Then we compute the ratio of the correctly predicted dependencies which are local (Local Ratio Corr.), as well as the Measurement(%) No Cons.",
"ratio of local dependencies in the generated trees (Local Ratio Ours), and in the ground-truth trees (Local Ratio GT).",
"The results of this analysis are shown in Table",
"5. For all datasets, the ratio of correctly predicted local dependencies (Local Ratio Corr.) (being > 50 ) is larger than the ratio for distant relations, which appears reasonable, since local dependency predictions are easier to predict than distant ones.",
"Further, comparing (Local Ratio GT) and (Local Ratio Ours) without the sentence constraint (first column) shows that the number of local dependency relations in the ground-truth discourse trees is consistently larger than the predicted number.",
"This indicates that the discourse information learned in the attention matrices goes beyond the oftentimes predominant local positional information.",
"However, even without the sentence constraint (first column), when the CLE algorithm can predict trees of any form, more than 40% of the relations are predicted as local, suggesting that the standard CLE approach can already capture local information well.",
"Branch Height Leaf Arc vac.",
"(%) RST-DT Ours(Sent Cons) 1.50 27.06 0.37 0.10 3% Ours(No Cons) 1.74 25.76 0.49 0.12 3% GT Tree 2.10 8.19 0.51 0.13 2% Instruction Ours(Sent Cons) 1.56 15.74 0.39 0.13 3% Ours(No Cons) 1.80 14.35 0.50 0.14 3% GT Tree 1.59 8.49 0.41 0.15 1% GUM Ours(Sent Cons) 1.61 44.94 0.40 0.05 0% Ours(No Cons) 2.14 43.08 0.54 0.08 0% GT Tree 2.02 12.17 0.51 0.04 0% Table 6: Statistics of our generated trees and the gold standard trees in terms of the average branch width , average height , average leaf ratio (micro) , average normalized arc length of the trees and percentage of the Vacuous trees .",
"we find that the local dependency ratio of the generated trees (Local Ratio Ours) further increases by more than 10% across all three datasets.",
"This makes intuitive sense, since the sentence constraint forces the generated trees to purely focus on local aspects within each sentence.",
"To sum up, we find that the learned attention matrices contains both local and distant dependency information, although local dependency predictions perform better.",
"Properties of Trees: Following Ferracane et al. (2019), we structurally inspect the generated dependency trees, and compare them with the gold trees on all three datasets.",
"This comparison is presented in Table 6, showing the average branch width , average height , average leaf ratio (micro) and average normalized arc length of the trees as well as the percentage of vacuous trees in each dataset 6 .",
"Looking at Table 6, it appears that our tree structure properties are similar to the ground-truth properties in regards to all measures except the height of the tree, which indicates that our trees tend to be generally deeper than gold standard trees, despite having a similar branch width and leaf ratio.",
"Furthermore, our trees are even deeper when using the sentence constraint.",
"Plausibly, by forcing each sentence to have its own sub-tree can make shallower inter-sentential structures less likely.",
"Exploring potential causes for the difference in tree-height, possibly due to the summarization task itself, are left as future work.",
"To investigate whether the performance is consistent cross different random initializations, and to explore the influence of the results with respect to the quality of the summarizer, we perform additional experiments with the 'CNNDM-6-8' model 7 .",
"Overall, we find that the performance is rather similar across random initializations.",
"Interestingly, a single head consistently shows better performance than all other heads across different initialization as well as datasets; however, while the position of the top-performing head is not always the same, it is often located in the second layer of the model.",
"Regarding the second experiment exploring sensitivity to the summarizer quality, we create summarizers of increasing quality by providing more and more training.",
"As expected, we find that as the summarization model is trained for additional steps, more accurate discourse information is learnt, concentrated in a single head.",
"We present a novel framework to infer discourse trees from the attention matrices learned in a transformer-based summarization model.",
"Experiment across models and datsets indicates that both dependency and structural discourse information are learned, that such information is typically concentrated in a single head, and that the attention matrix also covers long distance discourse dependencies.",
"Overall, consistent results across datasets and models suggest that the learned discourse information is general and transferable inter-domain.",
"In the future, we want to explore if simpler summarizers like BERTSUM (Liu and Lapata, 2019b) can also capture discourse info; specifically studying if the importance of the heads corresponds to the captured discourse info, which may help pruning summarization model by incorporating discourse info, in spirit of Xiao et al. (2020).",
"With respect to dependency tree generation possible improvements could come by looking for additional strategies balancing between guidance and flexibility, as Kuhlmann and Nivre (2006) explore for syntactic dependency parsing.",
"To address the problem of data sparsity in discourse parsing, we want to synergistically leverage other discourse-related tasks, in addition to sentiment and summarization, like topic modeling.",
"7 More details can be found in Appendix C. Acknowledgments We thank the anonymous reviewers and the UBC-NLP group for their insightful comments.",
"This research was supported by the Language & Speech Innovation Lab of Cloud BU, Huawei Technologies Co., Ltd.",
"We further acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).",
"Nous remercions le Conseil de recherches en sciences naturelles et en gnie du Canada (CRSNG) de son soutien."
] | [
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain"
] |
[
"Recent work has shown that contextualized word representations derived from neural machine translation are a viable alternative to such from simple word predictions tasks.",
"This is because the internal understanding that needs to be built in order to be able to translate from one language to another is much more comprehensive.",
"Unfortunately, computational and memory limitations as of present prevent NMT models from using large word vocabularies, and thus alternatives such as subword units (BPE and morphological segmentations) and characters have been used.",
"Here we study the impact of using different kinds of units on the quality of the resulting representations when used to model morphology, syntax, and semantics.",
"We found that while representations derived from subwords are slightly better for modeling syntax, character-based representations are superior for modeling morphology and are also more robust to noisy input.",
"Recent years have seen the revolution of deep neural networks and the subsequent rise of representation learning based on network-internal activations.",
"Such representations have been shown useful when addressing various problems from fields such as image recognition (He et al., 2016), speech recognition (Bahdanau et al., 2016), and natural language processing (NLP) (Mikolov et al., 2013a).",
"The central idea is that the internal representations trained to solve an NLP task could be useful for other tasks as well.",
"For example, word embeddings learned for a simple word prediction task in context, word2vec-style (Mikolov et al., 2013b), have now become almost obligatory in state-of-the-art NLP models.",
"One issue with such word embeddings is that the resulting representation is context-independent.",
"Recently, it has been shown that huge performance gains can be achieved by contextualizing the representations, so that the same word could have a different embedding in different contexts.",
"This is best achieved by changing the auxiliary task.",
"For example, ELMo (Peters et al., 2018) learns contextualized word embeddings from language modeling (LM) using long short-term memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997).",
"It has been further argued that complex auxiliary tasks such as neural machine translation (NMT) are better tailored for representation learning, as the internal understanding of the input language that needs to be built by the network to be able to translate from one language to another needs to be much more comprehensive compared to what would be needed for a simple word prediction task.",
"This idea is implemented in the seq2seq-based CoVe model (McCann et al., 2017).",
"More recently, the BERT model (Devlin et al., 2019) proposed to use representation from another NMT model, the Transformer, while optimizing for two LM-related auxiliary tasks: ( i ) masked language model and ( ii ) next sentence prediction.",
"Another important aspect of representation learning is the basic unit the model operates on.",
"In word2vec-style embeddings, it is the word, but this does not hold for NMT-based models, as computational and memory limitations, as of present, prevent NMT from using a large vocabulary, typically limiting it to 30-50k words (Wu et al., 2016).",
"This is a severe limitation, as most NLP applications need to handle vocabularies of millions of words, e.g., word2vec (Mikolov et al., 2013b), GloVe (Pennington et al., 2014) and Fast-Text (Mikolov et al., 2018) offer pre-trained embeddings for 3M, 2M, and 2.5M words/phrases.",
"The problem is typically addressed using byte-pair encoding (BPE), where words are segmented into pseudo-word sequences (Sennrich et al., 2016).",
"A less popular solution is to use characters as the basic unit (Chung et al., 2016; Lee et al., 2017), and in the case of morphologically complex languages, yet another alternative is to reduce the vocabulary size by using unsupervised morpheme segmentation (Bradbury and Socher, 2016).",
"The impact of using different units of representation in NMT models has been studied in previous work (Ling et al., 2015; Costa-juss`a and Fonol-losa, 2016; Chung et al., 2016; Lee et al., 2017, among others), but the focus has been exclusively on the quality of the resulting translation output.",
"However, it remains unclear what input and output units should be chosen if we are primarily interested in representation learning.",
"Here, we aim at bridging this gap by evaluating the quality of NMT-derived embeddings originating from units of different granularity when used for modeling morphology, syntax, and semantics (as opposed to end tasks such as sentiment analysis and question answering).",
"Our contributions are as follows: We study the impact of using words vs. characters vs. BPE units vs. morphological segments on the quality of representations learned by NMT models when used to model morphology, syntax, and semantics.",
"We found that while representations derived from morphological segments are better for modeling non-local syntactic and semantic dependencies, character-based ones are superior for morphology and are also more robust to noise.",
"There is also value in combining different representations.",
"Representation analysis aims at demystifying what is learned inside the neural network black-box.",
"This includes analyzing word and sentence embeddings (Adi et al., 2017; Qian et al., 2016b; Ganesh et al., 2017; Conneau et al., 2018, among others), RNN states (Qian et al., 2016a; Shi et al., 2016; Wu and King, 2016; Wang et al., 2017), and NMT representations (Shi et al., 2016; Belinkov et al., 2017a), as applied to morphological (Vy-lomova et al., 2017; Dalvi et al., 2017), semantic (Qian et al., 2016b; Belinkov et al., 2017b) and syntactic (Linzen et al., 2016; Tran et al., 2018; Conneau et al., 2018) tasks.",
"See Belinkov and Glass (2019) for a recent survey.",
"Other studies carried a more fine-grained neuron-level analysis for NMT and LM (Dalvi et al., 2019; Bau et al., 2019; Lakretz et al., 2019).",
"While previous work focused on words, here we compare units of different granularities.",
"Subword translation units aim at reducing the vocabulary size and the out-of-vocabulary (OOV) rate.",
"Researchers have used BPE units (Sennrich et al., 2016), morphological segmentation (Brad-bury and Socher, 2016), characters (Durrani et al., 2014; Lee et al., 2017), and hybrid units (Ling et al., 2015; Costa-juss`a and Fonollosa, 2016) to address the OOV word problem in MT. The choice of translation unit impacts what the network learns.",
"Sennrich (2017) carried a systematic error analysis by comparing subword versus character units and found the latter to be better at handling OOV and transliterations, whereas BPE-based subword units were better at capturing syntactic dependencies.",
"In contrast, here we focus on representation learning, not translation quality.",
"Robustness to noise is an important aspect in machine learning.",
"It has been studied for various models (Szegedy et al., 2014; Goodfellow et al., 2015), including NLP in general (Paper-not et al., 2016; Samanta and Mehta, 2017; Liang et al., 2018; Jia and Liang, 2017; Ebrahimi et al., 2018; Gao et al., 2018), and character-based NMT in particular (Heigold et al., 2018; Belinkov and Bisk, 2018).",
"Unlike this work, we compare robustness to noise for units of different granularity.",
"Moreover, we focus on representation learning rather than on the quality of the translation output.",
"Our methodology is inspired by research on interpreting neural network (NN) models.",
"A typical framework involves extracting feature representations from different components (e.g., en-coder/decoder) of a trained model and then training a classifier to make predictions for an auxiliary task.",
"The performance of the trained classifier is considered to be a proxy for judging the quality of the extracted representations with respect to the particular auxiliary task.",
"Formally, for each input word x i we extract the corresponding LSTM hidden state(s) from each layer of the encoder/decoder.",
"We then concatenate the representations of the layers and use them as a feature vector z i for the auxiliary task.",
"where P ( l | x i ) = exp( l z i ) (cid:80) l (cid:48) exp( l (cid:48) z i ) is the probability that word x i is assigned label l .",
"We learn the weights RD L using gradient descent.",
"Here D is the dimensionality of the latent representations z i and L is the size of the label set for property P .",
"See Section 4 for details.",
"We consider four representation units: words, byte-pair encoding (BPE) units, morphological units, and characters.",
"Table 2 shows an example of each representation unit.",
"BPE splits words into symbols (a symbol is a sequence of characters) and then iteratively replaces the most frequent sequences of symbols with a new merged symbol.",
"In essence, frequent character n -grams merge to form one symbol.",
"The number of merge operations is controlled by a hyper-parameter OP ; a high value of OP means coarse segmentation and a low value means fine-grained segmentation (Saj-jad et al., 2017).",
"For morphologically segmented units , we use an unsupervised morphological segmenter, Morfessor (Smit et al., 2014).",
"Note that although BPE and Morfessor segment words at a similar level of granularity, the segmentation generated by Morfessor is linguistically motivated.",
"For example, it splits the gerund verb shooting into root shoot and the suffix ing .",
"Compare this to the BPE segmentation sho + oting , which has no linguistic connotation.",
"On the extreme, the fully character-level units treat each word as a sequence of characters.",
"Previous work on analyzing NMT representations has been limited to the analysis of word representations only, 1 where there is a one-to-one mapping from input units (words) and their NMT representations (hidden states) to their linguistic annotations (e.g., morphological tags).",
"In the case of subword-based systems, each word may be split into multiple subword units, and each unit has its own representation.",
"It is less trivial to define which representations should be evaluated when predicting a word-level linguistic property such as part of speech.",
"We consider two simple approximations to estimate a word representation from subword units: ( i ) Average : for each source (or target) word, we average the activation values of all the subwords (or characters) comprising it.",
"In the case of a bi-directional encoder, we concatenate the averages from the forward and the backward activations of the encoder on the subwords (or characters) that represent the current word.",
"2 ( ii ) Last : we consider the activation of the last subword (or character) as the representation of the word.",
"For the bi-directional encoder, we concatenate the forward encoder's activation on the last subword unit with the backward encoder's activation on the first subword unit.",
"This formalization allows us to analyze the quality of characterand subword-based representations at the word level via prediction tasks.",
"Such kind of analysis has not been performed before.",
"1 Belinkov et al. (2017a) analyzed representations trained from character CNN models (Kim et al., 2016), but the extracted features were still based on word representations produced by the character CNN.",
"As a result, they could not analyze and compare results for the BPE and character-based models that do not assume segmentation into words.",
"2 One could envision more sophisticated averages, such as weighting via an attention mechanism.",
"We choose three fundamental NLP tasks that serve as a good representative of various properties inherent in a language, ranging from morphology (word structure), syntax (grammar), and semantics (meaning).",
"In particular, we experiment with morphological tagging for German, Czech, Russian, and English, 3 lexical semantic tagging for English and German, and syntactic tagging via CCG supertagging for English.",
"Table 1 shows an example sentence with annotations for each task.",
"The morphological tags capture word structure, the semantic tags reflect lexical semantics, and the syntactic tags (CCG supertags) capture global syntactic information locally, at the lexical level.",
"For example, in Table 1, the morphological tag VBZ for the word receives marks it as a verb in third person, singular, present tense; the semantic tag ENS describes a present simple event category; and the syntactic tag PP/NP ) can be thought of as a function that takes a noun phrase on the right (e.g., the capital of USA ), and returns a prepositional phrase (e.g., in the capital of USA ).",
"Artificial Error Induction Recent studies have shown that small perturbations in the input can cause significant deterioration in the performance of the deep neural networks.",
"Here, we evaluate the robustness of various representations under noisy input conditions.",
"We use corpora of real errors harvested by Belinkov and Bisk (2018).",
"The errors contain a good mix of typos, misspellings, and other kinds of errors.",
"In addition, we created data with synthetic noise.",
"We induced two kinds of errors: ( i ) swap and ( ii ) middle .",
"Swap is a common error, which occurs when neighboring characters are mistakenly swapped, e.g., word wodr .",
"In Middle errors, the order of the first and the last characters of a word are preserved, while the middle characters are randomly shuffled (Rawlinson, 1976), e.g., example eaxmlpe .",
"We corrupt n % words randomly in each test sentence, using swap or middle heuristics, or replace words using real-error corpora.",
"We then re-extract feature vectors for the erroneous words in a sentence and we reevaluate the prediction capability of these embeddings on the linguistic tasks.",
"3 As English is morphologically poor, we use part-of-speech tags for it.",
"We refer to English part-of-speech tags as morphological tags later in the paper in order to keep the terminology consistent.",
"Data and Languages We trained NMT systems for four language pairs: German-English, Czech-English, Russian-English, and English-German, using data made available through two popular machine translation campaigns, namely, WMT (Bojar et al., 2017) and IWSLT (Cettolo et al., 2016).",
"We trained the MT models using a concatenation of the NEWS and the TED training datasets, and we tested on official TED test sets (testsets-11-13) to perform the evaluation using BLEU (Papineni et al., 2002).",
"We trained the morphological classifiers and we tested them on a concatenation of the NEWS and the TED testsets, which were automatically tagged as described in the next paragraph.",
"We trained and evaluated the semantic and the syntactic classifiers on existing annotated corpora.",
"See Table 3 for details about the datasets.",
"Taggers We used RDRPOST (Nguyen et al., 2014) to annotate data for the classifier.",
"For semantic tagging, we used the gold-annotated semantic tags from the Groningen Parallel Meaning Bank (Abzianidze et al., 2017), which were made available by (Bjerva et al., 2016).",
"The tags are grouped into coarse categories such as events, names, time, and logical expressions.",
"There is enough data for English ( 42K), and we randomly sampled the same amount of data we used to train our morphological classifiers to train the semantic classifiers.",
"Yet, only 1,863 annotated sentences (12,783 tokens) were available for German.",
"Thus, in the experiments, we performed 5-fold cross-validation.",
"For CCG supertagging, we used the English CCGBank (Hockenmaier and Steedman, 2007), which contains 41,586/2,407 train/test sentences.",
"4 See Table 3 for more detailed statistics about the train/dev/test datasets we used.",
"MT Systems and Classifiers We used seq2seq-attn (Kim, 2016) to train a two-layer encoder-decoder NMT model based on LSTM representation with attention (Hochreiter and Schmidhuber, 1997) with a bidirectional encoder and a unidirectional decoder.",
"5 We used 500 dimensions for both word embeddings and LSTM states.",
"We trained the systems with SGD for 20 epochs and we used the final model, i.e., the one with the lowest loss on the development dataset, to generate features for the classifier.",
"We trained our neural machine translation models in both *-to-English and English-to-* translation directions, and we analyzed the representations from both the encoder and the decoder .",
"In order to analyze the representations derived from the encoder side, we fixed the decoder side with BPE-based embeddings, and we trained the source side with word/BPE/Morfessor/character units.",
"Similarly, when analyzing the representations from the decoder side, we trained the encoder representation with BPE units, and we varied the decoder side using word/BPE/char units.",
"Our motivation for this setup is that we wanted to analyze the encoder/decoder side representations in isolation, keeping the other half of the network (i.e., the de-coder/encoder) static across different settings.",
"6 4 There are no available CCG banks for the other languages we experiment with, except for a German CCG bank, which is not publicly available (Hockenmaier, 2006).",
"In our experiments, we used 50k BPE operations and we limited the vocabulary of all systems to 50k.",
"Moreover, we trained the word, BPE, Morfessor, and character-based systems with maximum sentence lengths of 80, 100, 100, and 400 units, respectively.",
"For the classification tasks, we used a logistic regression classifier whose input is either the hidden states in the case of the word-based models, or the Last or the Average representations in the case of characterand subword-based models.",
"Since for the bidirectional encoder we concatenate forward and backward states from all layers, this yields 2,000/1,000 dimensions when classifying using the representations from the en-coder/decoder: 500 dimensions 2 layers 2 directions (1 for the decoder, as it is uni-directional).",
"In all cases, we trained the logistic regression classifier for ten epochs.",
"We now present the evaluation results for using representations learned from different input units to predict morphology, semantics, and syntax.",
"For subword and character units, we found the activation of the last subword/character unit of a word to be consistently better than using the average of all activations (See Table 4).",
"Therefore, we report only the results using the Last method, for the remainder of the paper.",
"Figure 1 summarizes the results for predicting morphological tags with representations learned using different units.",
"The character-based representations consistently outperformed other representations on all language pairs, while the word-based ones performed worst.",
"The differences are more significant in the case of languages with relatively complex morphology such as Czech.",
"We see in Figure 1 a difference of up to 14% in favor of character-based representations when compared with word-based ones.",
"The improvement is minimal in the case of English (1.2%), which is a morphologically simpler language.",
"This is also somewhat reflected in the translation quality.",
"We can see in Table 5 that character-based segmentation yielded higher BLEU scores in the case of a morphologically rich language such as Czech, but performed poorly in the case of German, which requires handling long-distance dependencies.",
"Comparing subword units, we found Morfessor to yield much better morphological tagging performance, especially in the case of morphologically rich languages such as Czech and Russian, supposedly due to the Morfessor's linguistically motivated segmentations, which are helpful for learning morphology.",
"We further investigated whether the performance difference between the representations is due to the difference in modeling infrequent and out-of-vocabulary (OOV) words.",
"Table 6 shows the OOV rate for each language, which is higher for morphologically rich languages.",
"Figure 2 shows that the gap between different representations is inversely proportional to the frequency of",
"the word in the training data, and character-based models handle infrequent and OOV words better.",
"Decoder Representations Next, we used the decoder representations from the English-to-* models.",
"We saw similar performance trends as for the encoder-side representations: characters performed best, and words performed worst.",
"Again, the morphological units performed better than the BPE-based units.",
"Comparing encoder representations to decoder representations, it is interesting to see that in several cases the decoder side representations performed better than the encoder side ones, even though the former were trained using a uni-directional LSTM.",
"However, since there is no difference in the general trends between the encoderand the decoder-side representations, below we focus on the encoder-side only.",
"Figure 3a summarizes the experimental results for evaluating representtaion units of the semantic tagging task.",
"For English, the subword (BPE and Morfessor) and the character representations yielded comparable results.",
"However, for German, BPE performed better.",
"This is in contrast with the morphology prediction experiments, where the character representations were consistently better.",
"We will discuss this in more detail in Section 7.",
"The final task we experimented with is CCG super-tagging, which reflects modeling syntactic structure.",
"Here we only have English tags, and thus we evaluate the performance of encoder representations for English German models, trained using words, characters, and subword units.",
"the best overall.",
"Moreover, there is no much difference when using word-based vs. BPE-based representations.",
"The character-based representations lag behind, but the difference in accuracy is small compared to the morphological tagging results.",
"7 It is noteworthy that here character-based representations perform worse than both words and subwords, contrary to their superior performance on morphology.",
"We will return to this in Section 7 below.",
"Next, we evaluated the robustness of the representations with respect to noise.",
"We induced errors in the test sets by corrupting 25% of the words in each sentence using different error types (syn-thetic or real noise), as described in Section 4.",
"We extracted the representations of the noisy test sets and we re-evaluated the classifiers.",
"Figure 4 shows the performance on each task.",
"We can see that characters yielded much better performance on all tasks and for all languages, showing minimal drop in accuracy, in contrast to earlier results where they did not outperform subword units 8 on the task of syntactic tagging.",
"This shows that character representations are more robust to noise.",
"Surprisingly, in a few cases, BPE performed worse than word units, e.g., in the case of syntactic tagging (80.3 vs. 81.1).",
"We found that BPE can segment a noisy word into two or more known subword units that have no real relationship to the actual word.",
"Thus, using representations of wrong subword units could hurt the performance.",
"We further investigated the robustness of each classifier by increasing the percentage of noise in the test data.",
"We found that the difference in representation quality stays constant across BPE and character representations, whereas word representations deteriorate significantly as the amount of noise increases (see Figure 5).",
"Our experiments show a complicated picture, where none of the representations is superior in all scenarios.",
"Characters were found to be better for morphological tagging, BPE was ahead in 7 For perspective, these numbers are above a majority class baseline of 72% and below the state-of-the-art, which is around 94-95% (Kadari et al., 2018; Xu, 2016).",
"the semantic tagging task for German (and about the same in English), and Morfessor units were slightly better for syntax.",
"Syntactic tagging requires knowledge about the complete sentence.",
"Splitting a sentence into characters substantially increases its length: on average from 50 words to 250 single-character tokens.",
"Thus, character-based models struggle to capture long-distance dependencies.",
"Sennrich (2017) also found this to be true in their evaluation based on contrastive translation pairs in German-English.",
"Similarly, in the case of morphological tagging, the information about the morphological structure of a word is dependent on the surrounding words plus some internal information (root, morphemes, etc.) present inside the word.",
"A character-based system has access to all of this information, and thus performs well.",
"Morphological segmentation performed better than BPE for the morphological tagging because its segments are linguistically motivated units (segmented into root + mor-phemes), thus making the information about the word morphology explicit in the representation.",
"In contrast, BPE solely focuses on the frequency of characters occurring together in the corpus, and thus can generate linguistically incorrect units.",
"The variations in performance for different representations suggest that they are learning different aspects of language, which might be complementary.",
"Thus, we tried to combine them.",
"Table 7 summarizes the results for morphological and semantic tagging.",
"9 We can see that combinations involving characters (B+C, W+C in the table) yield 9 We observed similar trends for the other tasks.",
"larger improvement compared to combining word-and BPE-based representations (W+B).",
"However, combining all three performed best for all languages and for all tasks.",
"We connect our findings with recent work on training state-of-the-art embeddings: CoVe (McCann et al., 2017), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2019).",
"Each of these architectures uses different units of representations: e.g., CoVe uses words, BERT is based on subword units, while ELMo focuses on characters.",
"10 We speculate that, although these models yield state-of-the-art results for several tasks, their performance may be suboptimal because of the choice of their underlying representation units.",
"In our experiments above, we have shown that it is possible to achieve potentially better performance when using units of different granularity jointly.",
"We have further shown that the best-performing representation unit is target task-dependent.",
"We believe this would be true for more complex NLP tasks as well.",
"For example, question answering generally requires learning long-range dependencies, and thus embeddings from a character-based model might not be the right choice in this case.",
"Our results show that character-based models are not a viable option for handling long-range dependencies, and subword-based representations might be a better option for such tasks.",
"Table 5 summarizes the translation performance of each system.",
"We can see that in most cases, the subword-based systems perform better than the word-based and the character-based ones.",
"However, this is not true in the case of using their representations as features for a core NLP task as in our experiments above.",
"For example, we have found that character-based representations perform best for the morphological tagging task.",
"On a side note, although BPE-based representations perform better for some tasks, they are sensitive to noise.",
"Their capability of segmenting any OOV word into known subwords may result in less reliable systems.",
"Notably, the translation performance of the BPE-based system can fall below that of the character-based system even in the presence of 10 However, note that ELMo uses character convolutions, which is different from a fully character-based model.",
"We studied the impact of using different representation unitswords, characters, BPE units, and morphological segmentson the representations learned by seq2seq models trained for neural machine translation.",
"In particular, we evaluated the performance of such representations on core natural language processing tasks modeling morphology, syntax, and semantics.",
"Representations derived from subword units are better for modeling syntax.",
"Character-based representations are distinctly better for modeling morphology.",
"Character-based representations are very robust to noise.",
"Using a combination of different representations often works best.",
"Based on our findings, we further conjecture that although subword-based segmentation based on BPE are a de-facto standard when building state-of-the-art NMT systems, the underlying representations they yield are suboptimal for many external tasks.",
"Character-based representations provide a more viable and robust alternative in this regard, followed by morphological segmentation.",
"In future work, we plan to study how different units affect representation quality in non-recurrent models such as the Transformer (Vaswani et al., 2017) as well as in convolutional architectures (Gehring et al., 2017).",
"We would also like to explore representations from robustly trained systems, which should improve performance on noisy input (Belinkov and Bisk, 2018; Heigold et al., 2018).",
"Finally, it would be interesting to study representations in other NLP tasks besides neural machine translation.",
"This work was funded by the QCRI, HBKU, as part of the collaboration with the MIT, CSAIL.",
"Yonatan Belinkov was also partly supported by the Harvard Mind, Brain, and Behavior Initiative."
] | [
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"result",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Utilizing clinical texts in survival analysis is difficult because they are largely unstructured.",
"Current automatic extraction models fail to capture textual information comprehensively since their labels are limited in scope.",
"Furthermore, they typically require a large amount of data and high-quality expert annotations for training.",
"In this work, we present a novel method of using BERT-based hidden layer representations of clinical texts as covariates for proportional hazards models to predict patient survival outcomes.",
"We show that hidden layers yield notably more accurate predictions than predefined features, outperforming the previous baseline model by 5.7% on average across C-index and time-dependent AUC.",
"We make our work publicly available at https://github.com/bionlplab/ heart_failure_mortality .",
"Survival analysis estimates the expected time until an event of interest occurs (Ranganath et al., 2016).",
"In clinical research, it is used to understand the relationship between prognostic covariates (e.g., age and treatment) and patient survival time for important use cases such as predicting mortality of heart failure patients and providing management recommendations for intensive care units during a public health crisis like the COVID-19 pandemic (Pandey et al., 2020; Sprung et al., 2020; Nielsen et al., 2019).",
"Clinical texts such as radiology reports contain rich information about patients that is used to diagnose disease, plan treatments and monitor progress.",
"It also contains the high-level reasoning of human experts that requires years of knowledge accumulation and professional training (Langlotz, 2015).",
"Despite their clinical relevance, it is challenging to use them in survival analysis since they are largely unstructured.",
"Automatic labelers are often unable to capture detailed information to distinguish between patients, especially ones with similar conditions, because they mostly rely on a small set of manually selected labels (Lao et al., 2017).",
"Developing methods for accessing the critical information embedded in unstructured clinical texts holds the potential to meaningfully benefit clinical research.",
"To bridge this gap, we propose a deep learning method to predict the survival probability of heart failure (HF) patients based on the high-dimensional feature representations of their radiology reports.",
"Concretely, we extract hidden features from the texts with BERT-based (Devlin et al., 2019) models and apply a recurrent neural network (RNN) to model sequences of reports and estimate the log-risk function for the overall mortality prediction.",
"This approach can encapsulate more textual information than hand-crafted features and incorporate higher-order temporal information from report sequences.",
"We find that our model improves on average 5.7% in both C-index and time-dependent AUC without requiring additional expert annotations.",
"We make three contributions through this work: (1) present a novel survival analysis model to leverage feature representations from clinical texts, (2) demonstrate that our model outperforms the ones dependent on predefined expert features and that this approach can generalize across various biomedical and clinical BERT models, and (3) make our work publicly available for reproduction by others.",
"Due to the lack of expert annotations, earlier automatic labelers mostly use predefined linguistic patterns to extract relevant information.",
"NegEx (Chapman et al., 2001) is a regular expression algorithm that identifies observations based on specified phrases.",
"NegBio (Peng et al., 2018) uses universal dependencies and subgraph matching in addition to regular expressions.",
"CheXpert (Irvin et al., 2019) extends NegBio by adding rules to extract, classify, and aggregate mentions to improve performance.",
"While they typically achieve a high precision, they suffer from a low recall because of their limited rules.",
"BERT (Devlin et al., 2019) is a transformer-based method that extracts feature representations of unlabeled text that are effective for transfer learning across various NLP tasks.",
"It is adapted to a wide range of domains, including biomedical and clinical domains (Lee et al., 2020; Alsentzer et al., 2019; Peng et al., 2019).",
"Recently, BERT models have been applied to labeling radiology reports.",
"CheXbert (Smit et al., 2020) and CheX-pert++ (McDermott et al., 2020) train on silver-standard datasets created with a rule-based labeler, CheXpert.",
"Although they outperform rule-based labelers, these approaches need a curated training corpus which can be costly to obtain and error-prone.",
"Furthermore, their labels are still limited and can miss critical information from the reports.",
"Regarding survival analysis, the Cox proportional hazards model (CPH) is widely adopted as it can deal with censored data and evaluate the prognostic values of covariates simultaneously (Cox, 1972).",
"DeepSurv (Katzman et al., 2018) and Deep-Hit (Lee et al., 2018) are more contemporary methods that use deep neural networks to model more complex, nonlinear relationships of predictor variables.",
"RNN-SURV (Giunchiglia et al., 2018) and DRSA (Ren et al., 2019) model time-variant, sequential patterns from predictor variables.",
"To the best of our knowledge, the compatibility of these models and high-dimensional features as covariates has not been tested.",
"Automatic extraction tools enable survival analysis to incorporate textual information from clinical texts.",
"Pandey et al. (2020) used a convolutional neural network to extract findings from radiology reports of heart failure patients and predict all-cause mortality with CPH.",
"Heo et al. (2020) performed stroke prognosis based on the document-level and sentence-level representations of MRI records.",
"Our work extends this line of research by using contextual deep representations of clinical texts to perform survival analysis.",
"We first formulate the survival analysis problem.",
"In the discrete context, we divide the continuous time into disjoint intervals V = ( t l 1 , t l ] where t 0 and t T are the first and last observation interval boundaries.",
"At time t u , the model predicts the survival probability in the prediction window ( t u , t T ] with longitudinal features in the observation window ( t 0 , t u ] (Figure 1).",
"For each participant i , the survival probability at each time t l ( l > u ) is S i ( t l ) = P r ( z > t l ) , where z is the time-to-event, time until death in our case.",
"The hazard rate of the survival probability is i ( t l ) = S i ( t l 1 ) S i ( t l ) S i ( t l ) .",
"Our framework consists of two stages: feature extraction and survival analysis (Figure 1).",
"The input of each time t l is given by the features extracted from the reports of each patient i .",
"In this work, we evaluate two sets of predefined features and hidden features of the reports.",
"The first feature set consists of 14 common radiographic findings in computed tomography (CT) imaging reports (aortic aneurysm, ascites, atelectasis, atherosclerosis, cardiomegaly, enlarged liver, gall bladder wall thickening, hernia, hydronephrosis, lymphadenopathy, pleural effusion, pneumonia, previous surgery, and pulmonary edema).",
"The findings are extracted using the convolutional neural network provided by Pandey et al. (2020) which had the reported performance of 0.90 F1 in average.",
"The second feature set consists of 14 predefined findings in CheXpert (Irvin et al., 2019) which are commonly found in radiology reports (atelectasis, cardiomegaly, consolidation, edema, enlarged car-diomediastinum, fracture, lung lesion, lung opacity, pleural effusion, pleural other, pneumonia, pneumothorax, support devices, and normal).",
"These features are extracted using CheXbert (Smit et al., 2020) with the reported performance of 0.80 F1 in average.",
"specifically, we extract the information before it passes on to the output layer that consists of 14 linear heads.",
"The representations are vectors of size 768.",
"Lastly, we construct sequential deep representations by creating arrays of up to three most recent reports of each patient.",
"As the reports can change over time based on the patient's condition, these features are time-variant and contain temporal information that cannot be obtained by a single report.",
"In addition to CheXbert, we apply BERT variations BERT, BioBert, ClinicalBert and BlueBert.",
"In this study, the hazard rate has the form",
"is a patient's log-risk of failure, X i are covariates representing a patient's variables up to t u , and baseline hazard at t u .",
"For the standard Cox Proportional-Hazards (CPH) model (Cox, 1972), ( X i ) has the form of a linear combination of p covariates 1 X i 1 + + p X ip .",
"In our experiments, the covariates are the features extracted from the reports.",
"can also be a non-linear risk function of a multilayer perceptron (MLP).",
"To this end, our model is the same as DeepSurv (Katzman et al., 2018).",
"Both CPH and DeepSurv cannot incorporate the higher-order temporal information from report sequences.",
"To solve this problem, we define = LST M ( X i ) to model the possible time-variant effects of the covariates leading up to t u (Figure 1).",
"Our model is similar to RNN-SURV (Giunchiglia et al., 2018) and DRSA (Ren et al., 2019).",
"The main difference is that the objective function is the average partial log-likelihood (Kvamme et al., 2019): 1 N (cid:88) i U l ( x i ) log (cid:88) j R l e ( x j ) (2) U l is the set of patients that are deceased or last known to be alive (censored) by time point t l .",
"R l is the set of all live and uncensored patients before t l .",
"N is the total number of deceased patients in the dataset.",
"The dataset (Pandey et al., 2020) is a collection of thoracoabdominal CT reports in English",
"for heart failure patients from the New York-Presbyterian/Weill Cornell Medical Center who were admitted and discharged with billing codes ICD-9 Code 428 or ICD-10 Code I50 from January 2008 and July 2018 (Table 1).",
"It was reviewed by the institutional board and de-identified.",
"We use each patient's three most recent reports or zero vectors for any missing ones.",
"Their time-to-event is calculated as the number of days between the most recent report date and death date if deceased or the last follow-up date if censored.",
"We perform simple preprocessing steps to confirm each patient has at least one report and nonnegative time-to-event.",
"To assess the discriminative accuracies of our models, we use the C-index (Harrell et al., 1982) and time-dependent area-under-the-curve (AUC) (Hea-gerty and Zheng, 2005), some of the most commonly used evaluation metrics in clinical research (Kamarudin et al., 2017; Pencina and D'Agostino, 2004; Uno et al., 2011).",
"Intuitively, the C-index measures the extent to which the model is able to assign logical risk scores.",
"An individual with shorter time-to-event T should have a higher risk score R than the ones with longer time-to-event.",
"Formally, it is defined as: C = (cid:80) i,j I ( T i > T j ) I ( R i < R j ) d j (cid:80) i,j I ( T i > T j ) d j (3) I ( c ) = (cid:40) 1 if c is true 0 else d j = (cid:40) 1 if T j exists 0 else Both C-index and AUC assign a random model 0.5 and a perfect model",
"1. We measure all-time C-index, C-index at 30 days (C-index@30), and AUC at 30 days and 365 days (AUC@30 and AUC@365) to show the models' performances dealing with different time-to-events 1 .",
"We perform a grid search to find the optimal hyper-parameters based on the metrics and use them for all configurations.",
"The learning rate is set to 0.0001 with an Adam optimizer.",
"We iterate the training process for 100 epochs with batch size 256 and early stop if the validation loss does not decrease.",
"The dropout rate is 0.6.",
"We perform five-fold cross-validation to produce 95% confidence intervals for each metric.",
"The training, validation and test splits are 70%, 10%, 20%, respectively.",
"We use pycox and PyTorch to implement the framework 2 .",
"The end-to-end training takes about 30 minutes with NVIDIA Tesla P100 16 GB GPU, mainly due to feature extraction.",
"Table 2 shows our experimental results with variations in covariates and survival analysis models.",
"Our LSTM model with hidden features (LSTM + Hidden Features) achieves the best results (0.709 in C-index), 3.5% and 0.5% improvements over CPH + Hidden Features and MLP + Hidden Features.",
"In contrast to the MLP, its data included the reports from patients' prior visits with more textual and higher-order temporal information.",
"Nonetheless, the improvements are stll marginal, suggesting that the evaluation of the effectiveness of LSTM 2 https://github.com/havakv/pycox in survival analysis in this context would require more empirical evidence, particularly with more longitudinal text data.",
"We observe that the hidden features provide at least 5% improvements over the other feature sets with both CPH and MLP.",
"This indicates that the hidden features capture textual information more thoroughly than the predefined features for survival analysis.",
"We find that Feature Set 2, obtained with CheXbert (Smit et al., 2020), performs significantly better ( > 10% C-index) than Feature Set 1, obtained with the CNN model (Pandey et al., 2020).",
"With both CPH and MLP, Feature Set 1 yields around 0.5 in C-index and AUC, whereas Feature Set 2 shows prognostic value in the 0.62-0.69 range.",
"The difference of the feature sets directly results in the performance difference.",
"While Feature Set 1 and Feature Set 2 have overlapping features (at-electasis, cardiomegaly, pleural effusion, and pneu-monia), Feature Set 1 is not as discriminatory as Feature Set",
"2. This observation informs us that much important textual information with prognostic value is likely lost between the feature sets.",
"Finally, we compare our model on BERT-Base variants.",
"BERT-Base, CheXbert and BlueBert used uncased text.",
"BioBert and ClinicalBert used cased text.",
"BioBert was pretrained on PubMed abstracts.",
"ClinicalBert was initialized with BioBert's weights and further trained on MIMIC-III clinical notes.",
"BlueBert was pretrained on both datasets altogether.",
"Table 3 shows that all BERT variants (except the original BERT) capture the textual information more comprehensively than the predefined features and yield significantly more accurate predictions.",
"Further, the models with more pertinence to radiology reports perform incrementally better.",
"BlueBert outperforms others and improves on CheXbert slightly.",
"This observation is consistent with the findings in (Peng et al., 2019).",
"These results illustrate that using hidden layer representations in survival analysis can generalize across deep learning models based on their areas of focus.",
"Incorporating the textual information of clinical texts in survival analysis is challenging because of their unstructured format.",
"Automatic extraction tools have a small set of features selected by experts and fail to capture the information fully and precisely.",
"We show a novel method of using hidden layer representations of clinical texts as covariates for proportional hazards models.",
"When applied to predicting all-cause mortality of heart failure patients, the results indicate that hidden features encapsulate more comprehensive and effective textual information than predefined features.",
"We plan to explore the use of the attention mechanism to the input sequence and test the generalizability of this method with more datasets.",
"In addition, we plan to gain more insights on how hidden features are influenced (e.g. word choice, text length, etc.) and add value for better prediction as interpretability is highly important in the medical domain.",
"We hope our small contribution provides assistance in the scalable development of accurate predictive models that harness clinical text information.",
"Research reported in this publication was supported by National Library of Medicine National Institutes of Health (NIH) under award number R00LM013001.",
"It was also supported by National Heart, Lung, and Blood Institute of NIH under award number R01HL1276610.",
"This study also received support from NewYork-Presbyterian Hospital (NYPH) and Weill Cornell Medical College (WCMC), including the Clinical and Translational Science Center (CTSC) (UL1TR002384) and Joint Clinical Trials Office (JCTO).",
"Additionally, we would like to thank Dr. Pranav Rajpurkar and Dr. Matthew P Lungren for providing the ChexBert model."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |